Review

Emergence and Causality in Complex Systems: A Survey of Causal Emergence and Related Quantitative Studies

by Bing Yuan 1,†, Jiang Zhang 1,2,*,†, Aobo Lyu 3, Jiayun Wu 4, Zhipeng Wang 2, Mingzhe Yang 2, Kaiwei Liu 2, Muyun Mou 2 and Peng Cui 4

1 Swarma Research, Beijing 100085, China
2 School of Systems Sciences, Beijing Normal University, Beijing 100875, China
3 Department of Electrical and Systems Engineering, Washington University, St. Louis, MO 63130, USA
4 Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2024, 26(2), 108; https://doi.org/10.3390/e26020108
Submission received: 7 November 2023 / Revised: 16 January 2024 / Accepted: 18 January 2024 / Published: 24 January 2024
(This article belongs to the Special Issue Causality and Complex Systems)

Abstract:
Emergence and causality are two fundamental concepts for understanding complex systems. They are interconnected. On the one hand, emergence refers to the phenomenon where macroscopic properties cannot be solely attributed to the properties of individual components. On the other hand, causality can exhibit emergence, meaning that new causal laws may arise as we increase the level of abstraction. Causal emergence (CE) theory aims to bridge these two concepts and even employs measures of causality to quantify emergence. This paper provides a comprehensive review of recent advancements in quantitative theories and applications of CE. It focuses on two primary challenges: quantifying CE and identifying it from data. The latter task requires the integration of machine learning and neural network techniques, establishing a significant link between causal emergence and machine learning. We highlight two problem categories: CE with machine learning and CE for machine learning, both of which emphasize the crucial role of effective information (EI) as a measure of causal emergence. The final section of this review explores potential applications and provides insights into future perspectives.

1. Introduction

Economic growth, environmental protection, sustainable development, the global climate crisis, social inequality, and many other issues are all intertwined with complex systems [1,2]. Therefore, gaining a deep understanding of how complex systems operate, evolve, grow, stabilize, and collapse is of paramount importance. However, this task is exceptionally challenging due to the fact that complex systems consist of diverse and heterogeneous agents that interact through complex nonlinear relationships [3]. Moreover, they all exhibit emergent phenomena, which are highly common in complex systems but carry a sense of mystery [4].
How did the first living cell emerge from the collisions between various molecules in the Earth’s early environment [5]? How does the cognitive concept of “I” emerge from the intricate interactions among countless neurons in our brain [6]? How do large neural language models suddenly exhibit emergent abilities [7]? These fundamental questions revolve around the concept of emergence in complex living, cognitive, and artificial systems. Emergence refers to the phenomenon where macroscopic properties and phenomena cannot be solely attributed to or explained by the properties of individual components [4,8,9,10]. This presents a formidable challenge to the traditional reductionist perspective while also shedding light on the underlying reasons for the enigmatic nature of emergent phenomena.
However, as elucidated by Bedau’s theory of weak emergence [10], many emergent phenomena can be comprehended through the interactions among the individuals within the system [4]. Complex systems, in fact, consist of extensive networks of interacting components [11]. Within these networks, even a minor cause, such as a perturbation of a single unit, can propagate through the interconnected network, resulting in a collective effect. This phenomenon is commonly known as the butterfly effect [12], which provides an explanation for the occurrence of emergence. On the other hand, emergent properties, such as homeostasis [13], can stabilize the system itself, thereby preserving the locality of causal effects and preventing the observation of macro-level effects. These phenomena demonstrate that complex systems achieve interactions through causal laws, where numerous local causal laws form interconnected causal networks as a whole, and this whole possesses unique causal characteristics.
Causality, or causation, refers to the connection between a cause and its resulting effect [14,15,16]. It describes the phenomenon in which an event, known as the cause, leads to another event, known as the effect. Traditional studies of causality have typically focused on the causal relationship between two or a few variables. However, the unique characteristics of causality in complex systems present new challenges to classical causal science due to the vast number of variables involved and the presence of emergent phenomena. In complex systems, it is possible for one cause to have multiple effects, and conversely, one effect may be influenced by a multitude of causes. Furthermore, in complex systems, causality often exhibits cross-level properties, which are closely associated with emergence.
Emergence and causation interconnect with each other. On one hand, emergence is the causal effect of complex and nonlinear interactions between components in complex systems [8,9]. On the other hand, emergent properties may have causal effects on individuals in complex systems [4,17]. For example, the price of fossil fuel is the emerging result of the interactions between buyers and sellers in the market. At the same time, the price may also provide feedback to the market: it can affect the decision making of each individual.
Furthermore, we can understand emergence through the perspective of causation. Emergence means that some phenomena and properties at the macroscopic level cannot be attributed to microscopic properties [18]. Thus, emergent properties or phenomena lose their usual direct explanations but may instead be attributed to causes at the macro-level, as pointed out by [19]. Therefore, new causalities can be observed at larger scales.
In conclusion, gaining a deep understanding of emergence is crucial in the field of complex system studies. Specifically, a quantitative theory of emergence is on the verge of taking shape. Such a theory holds the potential to address significant challenges, including the origins of life [20], the emergence of novel capabilities in large neural network models [7], and the potential for intelligence, consciousness, and free will to arise in artificial systems [21]. Causality not only exhibits a profound connection with emergence but is also considered by many researchers as one of the most crucial perspectives for quantitatively comprehending emergence [18,19,22,23].
Two primary challenges take precedence in understanding emergence from a causal perspective. The first is establishing a quantitative definition of emergence, whereas the second involves identifying emergent behaviors or phenomena through data analysis.
To address the first challenge, two prominent quantitative theories of emergence have emerged in the past decade. The first is Erik Hoel et al.’s theory of causal emergence [19], whereas the second is Fernando E. Rosas et al.’s theory of emergence based on partial information decomposition [24].
Hoel et al.’s theory of causal emergence specifically addresses complex systems that are modeled using Markov chains. It employs the concept of effective information (EI) to quantify the extent of causal influence within Markov chains and enables comparisons of EI values across different scales [19,25]. Causal emergence is defined by the difference in EI between the macro-level and micro-level. Several discrete Markov chains have been shown to exhibit causal emergence when the EI of their macro-level dynamics exceeds that of their micro-level dynamics. Hoel and other researchers further extended the measures of effective information and causal emergence to dynamical systems with continuous variables [26] and complex networks [27]. Other measures are also possible for quantifying causal effects and, consequently, causal emergence. In [28], Comolatti and Hoel systematically compared several causal effect measures and concluded that causal emergence is independent of the choice of measure.
In Hoel’s theory of causal emergence, however, a coarse-graining strategy must be established beforehand; alternatively, the strategy can be derived by maximizing the effective information (EI) [19]. The latter task becomes challenging for large-scale systems due to the computational complexity involved. To address these problems, Rosas et al. introduced a new quantitative definition of causal emergence [24] that does not depend on coarse-graining methods, drawing from partial information decomposition (PID)-related theory. PID is an approach developed by Williams et al., which seeks to decompose the mutual information between a target and source variables into non-overlapping information atoms: unique, redundant, and synergistic information [29]. Building on this groundwork, Rosas further developed the concept and introduced a theory called ΦID to decompose the mutual information between multiple target and source variables [30]. This framework provides a quantitative definition of causal emergence by measuring the positive synergy information between the source and target variables based on the inherent characteristics of the system.
The second challenge pertains to the identification of emergence from data. In an effort to address this issue, Rosas et al. derived a numerical method [24]. However, it is important to acknowledge that this method offers only a sufficient condition for emergence and is an approximate approach. Another limitation is that a coarse-grained macro-state variable should be given beforehand to apply this method. Hence, there is a need for the development of new methods.
Recently, artificial intelligence, propelled by the rapid advancements in machine learning and deep neural network technology, has witnessed significant progress. In the context of causal emergence, there are two key aspects to consider. Firstly, machine learning and neural network technology can be employed to address the challenge of identifying causal emergence. By leveraging these tools, we can develop approaches to effectively detect and analyze causal emergence phenomena. Secondly, the concepts and techniques from causal emergence can be introduced into machine learning to enhance the generalization capabilities of models. This integration can potentially improve the ability of machine learning algorithms to generalize well beyond the training data, leading to more robust and adaptable systems.
In a recent study by Zhang et al. [31], a machine learning framework named the Neural Information Squeezer (NIS) was introduced to address the challenge of identifying causal emergence using Hoel et al.’s framework. The NIS neural network, functioning as a “machine observer” equipped with an internal model, exhibits a remarkable ability to identify causal emergence across various types of data. In the latest updated version of this work, the Neural Information Squeezer Plus (NIS+) has been developed to directly maximize the critical measure of causal emergence theory, namely effective information (EI) [32]. Through extensive experiments conducted on both simulated data and real brain data, the NIS+ has demonstrated its ability to automatically find emergent macro-variables and macro-dynamics. Consequently, the NIS+ enables quantifying causal emergence in data with the learned macro-dynamics. The results of these experiments highlight the effectiveness and potential of the NIS+ in capturing and analyzing causal emergence phenomena.
Furthermore, the NIS+ showcases superior performance in terms of generalization ability by directly maximizing effective information (EI). This brings forth a second question: can we leverage the measure of causation, EI, in the context of causal emergence, to enhance the generalization capability of neural networks for out-of-distribution scenarios? This concept is referred to as causal emergence for machine learning. By exploring this idea, we aim to bridge the gap between causal emergence and machine learning, potentially unlocking new avenues for improving the generalization abilities of machine learning.
Finally, in Section 5, we address several important and related issues. Firstly, we explore the similarities and differences between two emerging fields: causal emergence and causal representation learning [33]. This comparison sheds light on the interplay between these two domains. Secondly, we delve into a philosophical problem concerning ontological or epistemological causality and emergence, providing insights into the underlying philosophical implications. Lastly, we discuss the potential applications of causal emergence in complex systems and how it contributes to our understanding of complex systems from a causal emergence perspective. These discussions broaden the scope of this paper and offer intriguing avenues for future research.
This paper aims to provide a comprehensive review of the latest research on the quantitative theory and applications of causal emergence and related works. It also explores the connections between causal emergence, machine learning, and complex systems. The subsequent section delves into the background of causal emergence, with a particular focus on the interplay between causation and emergence in complex systems. In Section 3, various quantitative theoretical frameworks are introduced, including Crutchfield et al.’s computational mechanics [22], Seth et al.’s Granger causal emergence [23], Hoel et al.’s causal emergence theory, and Rosas et al.’s theory of emergence based on information decomposition. Additionally, related concepts such as the coarse-graining strategy, measures of effective information, and partial information decomposition are discussed, and a comparative analysis of these theories is presented. Section 4 addresses the connection between causal emergence theory and machine learning. It explores the use of machine learning and neural network techniques for identifying causal emergence and extends the measure of effective information (EI) to machine learning problems. Finally, this paper delves into other important topics and potential applications in the fields of machine learning and complex systems.

2. Background: From Causality to Emergence

2.1. Causality

Causality, which is a fundamental concept in many fields, including philosophy, natural science, and social science, refers to the relationship between a pair of events, where the first event (cause) can influence the second one (effect) [14,15,16]. The relationship between cause and effect can exhibit either deterministic or probabilistic characteristics. In deterministic causality, the cause will always produce the same effect, whereas in probabilistic causality, the cause will only produce the effect with a certain probability.
In most physical systems governed by differential equations or Markovian dynamics, whether deterministic or probabilistic, causality is inherent. This arises from the fact that manipulating a variable within the system can result in observable changes in other variables, albeit with probabilistic outcomes. As pointed out by Y. Iwasaki and H. A. Simon [34], one physical mechanism may correspond to several possible causal structures because causal graphs describe local causal interactions, whereas mechanisms are the global constraints for all variables. We further discuss the relationship between Markovian dynamics and causal models in detail below.
If we do not know the physical mechanism of a system, how can we model the causality behind the system? There are several parallel theoretical frameworks that can be used to do this. For example, Judea Pearl utilized probabilistic graphical models (e.g., Bayesian networks, causal graphs, and structural causal models) to characterize causal interactions [15,16]. In these models, nodes represent random variables and acyclic links represent causal interactions. Pearl distinguished and quantified three levels of causality in different models. The first level is association, i.e., the correlation between variables, which can be modeled using Bayesian networks [35]. The second level is intervention, which can be characterized using causal graph models [16]. The difference between a causal graph and a Bayesian network is that the former enables defining a special operator, called the “do” operator, to simulate the interventions performed by the experimenter. The last level is counterfactual, which can be quantified using structural causal models [15]. The distinguishing characteristic of a structural causal model is that all interactions within a causal graph can be described by a set of deterministic functions that involve the input variables and unknown components, thereby introducing uncertainty. Counterfactual causal inferences, that is, inferences under an imaginary scenario, can be conducted only on structural causal models. The different types of causal models and the corresponding causal hierarchy in Pearl’s theory are shown in Figure 1.
Another quantitative framework for causal inference is Donald B Rubin’s “potential outcome” theory, which operates at the counterfactual level [36]. According to this theory, each individual has two potential outcomes: the outcome that would be observed if the individual receives the treatment, and the outcome that would be observed if the individual does not receive the treatment. Rubin’s theory suggests that the causal effect of a treatment can be estimated by comparing the difference between the observed outcome and the unobserved potential outcome for each individual.
Researchers have also developed a bundle of methods, named causal discovery algorithms [37], to automatically discover the causal relations between variables merely from data. For example, constraint-based algorithms [38] and score-based algorithms [39] for causal discovery are two typical representations. Causal discovery has emerged as an interdisciplinary area at the intersection of machine learning, causal inference, and statistics.
Due to the uncertainty and ambiguity behind the discovered causal relations, measuring the degree of causal effect between two variables is another important problem. Numerous independent historical studies have addressed the issue of causation measurement. These include Hume’s concept of constant conjunction [40], Eells’ and Suppes’ measures of causation as probability raising [41,42], and Judea Pearl’s measures of causation [16]. Effective information (EI) in the framework for causal emergence is also an indicator for measuring causal effect, and we explain this point in Section 3.5.3. All these measures can be roughly decomposed into two causal primitives, which characterize the necessary causation and sufficient causation, respectively [43].
It is crucial to differentiate between Markovian dynamics or dynamical mechanism models and the causal models introduced above. In current causal emergence theories [19,24], Markovian dynamics, rather than causal models, including causal graphs and structural causal models, play a central role. There are several distinctions between Markovian dynamics and these causal models.
Firstly, in Markovian dynamics, the temporal evolution of one or multiple variables is typically described, with causal relationships between these variables extending across different points in time. Consequently, all variables are dependent on time, and a variable at one time step can causally impact another variable at the subsequent time step. Conversely, the aforementioned causal models do not explicitly incorporate the element of time. In these models, neither the variables nor the causal relationships are explicitly defined as functions of time.
Secondly, it is worth noting that circular interaction structures, including self-looped interactions, may exist in Markovian dynamics. However, such circular structures are typically not permitted in the causal models we introduced. This distinction arises from the fact that both causal graph models and structural causal models are constructed based on directed acyclic graphs (DAGs) [16]. However, this issue can be resolved by converting Markovian dynamics into DAGs. The initial step involves introducing static causal variables at each time step. Consequently, the same variable in Markovian dynamics can be transformed into a set of different variables at different time steps. This expansion allows for causal relationships across time steps while eliminating circular structures in causal graphs [44].
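As a minimal illustration of this unrolling (the variable names and time horizon below are hypothetical, not drawn from the cited works), a cyclic Markovian interaction structure can be expanded into a DAG by creating one time-indexed copy of each variable per step:

```python
# A minimal sketch of the time-unrolling construction described above:
# a Markovian interaction graph (possibly with cycles and self-loops) is
# expanded into a DAG by creating one copy of every variable per time step.

def unroll_to_dag(interactions, T):
    """interactions: dict mapping each variable to the variables it
    influences at the next time step (self-loops allowed).
    Returns DAG edges between time-indexed copies of the variables."""
    edges = []
    for t in range(T - 1):
        for cause, effects in interactions.items():
            for effect in effects:
                # (cause, t) -> (effect, t+1): acyclic by construction,
                # since every edge points strictly forward in time.
                edges.append(((cause, t), (effect, t + 1)))
    return edges

# A cyclic micro-level structure: X drives Y, Y feeds back on X,
# and Y has a self-loop.
micro = {"X": ["Y"], "Y": ["X", "Y"]}
print(unroll_to_dag(micro, T=3))
```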
Given the limitations of conventional causal models based on directed acyclic graphs (DAGs) [45], there is a need for new representation methods. For instance, Richardson [46] expanded the causal modeling approach to include directed graphs with cycles. Spirtes [47] discussed DAG representations of feedback models and applied them to economic processes. Lacerda et al. also developed a method utilizing independent component analysis to discover cyclic causal models [48]. Furthermore, Forré and Mooij introduced a novel causal discovery method for nonlinear structural causal models with cycles and latent confounders [49]. George T.H. Ellison proposed a methodology that leverages domain knowledge to extract DAGs from temporal relationships among multiple variables [50]. Causal discovery methods can also be extended to temporal data by segregating various temporal data analysis tasks, such as classification, clustering, and prediction. Recently, Gong et al. provided a comprehensive review of these techniques in [51]. These advancements have the potential to broaden the range of available causal models.
In summary, causality is a fascinating interdisciplinary topic spanning several subjects. Numerous quantitative frameworks and measures have been developed in recent decades.

2.2. Emergence

Emergence, known as “the whole is greater than the sum of parts” [52], is a central concept in many philosophical and theoretical discussions about the nature of complexity and the relationship between micro- and macro-levels of organization. Therefore, the scale of a system (micro or macro) and the cross-level interactions must be considered when we discuss emergence. This may increase the complexity of the study of emergence [53,54,55,56,57,58,59].
Although many examples of emergence have been pointed out in various fields [4,59], including the herding behaviors of birds [60], the collective behaviors of simple computer programs [61], and the emergent abilities of large language models [7], a unifying understanding of this phenomenon does not yet exist. Researchers have tried to understand emergence from different perspectives, including self-organization [62], order out of disorder [63], and causality. In this review, we mainly focus on the last point of view.
Emergence has deep connections with causation [10,18,64,65]. First, emergence emphasizes the new properties arising from the interactions between the components in the system, rather than the components themselves. Consequently, all emergent properties and processes at the macro-level can, in principle, be comprehended as the causal effects resulting from interactions between individuals at the micro-level. This is what the philosophical notion of “supervenience” tries to describe.
However, this does not mean that all emergent phenomena at the macro-level can be easily attributed to micro-level individuals and their interactions. Two major reasons can account for this. The first reason is due to the principle of computational irreducibility proposed by Wolfram [66], which claims that although we know emergent properties arise from a set of simple rules, there is no shortcut, in principle, to predict the results other than implementing the rules. For example, the complex behaviors (e.g., herding and aligning) of the Boid model are the results of simple rules, but these behaviors cannot be reduced to individual behaviors simply without simulating them [60].
The second reason is that emergent behaviors may be determined by other emergent properties at the macro-level [19,24]. For example, although the price of rice in the market is determined by all the interactions (bargaining) between buyers and sellers, it is also the causal effect of the price of fossil oil because severe shortages of fossil oil can cause inflation. Therefore, the cause of one emergent property (rice price) is another emergent macro-level coarse-grained variable. This means new causation can be observed between macro-level coarse-grained variables, as pointed out by the theory of causal emergence. Interestingly, the causation at different levels must be consistent to explain the micro-level event of fluctuating rice prices. Otherwise, a double causation [67] fallacy may appear. This consistency between causality at different scales is called the causal equivalence principle in Yurchenko’s latest article [68].
In order to better understand emergence, Bedau et al. [10,65] classified emergence into three categories according to the causal interactions between the micro- and macro-levels: nominal emergence [69,70], weak emergence [10,71], and strong emergence [17,72].
Of these categories, the notion of nominal emergence is the least controversial. It can be described as a kind of property that can be possessed by macro-level patterns or processes but not by their micro-level components [69,70]. For example, pixel patterns on a screen are nominal emergent properties. We can consider such nominal emergent patterns as “supervenient” because all macro-level properties derive from individuals.
Weak emergence refers to macro-level properties or processes that derive through interactions between individual components in a complicated way such that they cannot be easily reduced to micro-level properties due to the principle of computational irreducibility [10,71]. For weak emergent patterns or processes, the causes may come from both the micro- and macro-levels; therefore, emergent causation may coexist with micro-level causation.
In the discourse surrounding emergence, weak emergence is generally more widely accepted than strong emergence. However, some concepts and descriptions remain somewhat ambiguous [73]. For instance, in Bedau’s original definition, a property is considered weakly emergent if and only if it can be derived from micro-dynamics through simulation. Nevertheless, the term “simulation” itself can be interpreted in different ways: whether it refers to digital or analog simulations is not explicitly specified. Bedau further elaborated on this concept by stating that if a macro-level property can, in principle, be simulated without requiring an actual simulation, then it can be considered weakly emergent [10]. However, even this explanation is not entirely clear-cut.
There are more debates on strong emergence, which refers to properties or processes at the macro-level that cannot, in principle, be reduced to micro-level properties, including the interactions of individuals [17,72]. Thus, the collective behaviors of the Boid model are not strongly emergent because they result from the interactions within the model. This notion is controversial because it rejects any mechanistic explanation for strongly emergent properties, which, by definition, cannot be explained by micro-level variables. Furthermore, it raises some long-standing philosophical debates about “causal fundamentalism” [74] and “supervenience” [75], and the existence of strong emergence remains an open problem due to the scarcity of concrete examples.
Jochen Fromm further explained strong emergence as the causal effect of downward causation [18]. Consider a system that contains three different scales: micro, meso, and macro. Downward causation refers to the causal power from the macro-level to the meso-level or from the meso-level to the micro-level. Consequently, although the strong emergent properties or processes at the meso-level are supervenient to micro-level properties and interactions, their causes derive from the macro-level and, therefore, second-order patterns or processes. However, there are many debates on the notion of downward causation itself, e.g., [64,68].
In summary, there are three types of causation based on their cross-level properties, which are associated with emergence, as depicted in Figure 2:
  • Upward causation: This involves the supervenience relation, where macro-level effects can be attributed to micro-level variables.
  • Intra-level causation: This refers to causal effects occurring within the same scale or level.
  • Downward causation: Here, macro-level properties influence micro-level properties.
In Figure 2, the black solid lines represent causation without controversy. However, according to a recent paper by Yurchenko [68], the causation represented by the dashed and dotted lines (except the intra-level ones at the meso-level) may not actually be causality but rather reasons or explanations. Yurchenko introduced the term “reason” to distinguish it from the conventional understanding of causality, as cross-level causal effects may not possess causal powers. Instead, they serve as reasons or explanations from the perspective of an observer. Yurchenko accepted intra-level causation at the meso- or macro-level as “real”, similar to causation at the micro-level. He even coined the term “causal equivalence principle” to represent the idea that all intra-level causations, as opposed to those that are cross-level, should be accepted and are equivalent to each other.
However, other scholars think that, compared to causal relationships at the micro-level, intra-level causation at the meso- or macro-level is problematic because it depends on the coarse-graining strategy [76,77,78]. As demonstrated in the numeric example in Section 3.5.4, the causation measured by EI is dependent on the coarse-graining method. Hoel et al. tried to avoid this problem by introducing EI maximization. Indeed, EI maximization serves as an objective standard for selecting the coarse-graining strategy and macro-dynamics; however, it does not guarantee a unique solution. We discuss this problem further in Section 5.2.

3. Quantifying Emergence by Causality

Since causality and emergence have a strong connection and causality has multiple quantitative frameworks and measures, it becomes natural to use causality to quantify emergence. In this section, we review several frameworks used to quantify multi-scale causality and emergence.

3.1. Early Related Works

Before the theory of causal emergence was proposed by Hoel et al., some works had introduced ideas very similar to causal emergence theory. For example, Crutchfield et al.’s computational mechanics theory considered causal states, which are partitions of the state space with strong predictive power. Furthermore, Seth et al. proposed G-emergence to quantify emergence using Granger causality. We discuss both in detail below.

3.2. Computational Mechanics

The theory of computational mechanics proposed by Crutchfield, Shalizi, and Feldman et al. tried to formulate this kind of emergent causation in a quantitative framework [22]. In some sense, computational mechanics can be understood as the inverse of statistical mechanics. This is because statistical mechanics derives macro-level consequences from micro-dynamics, whereas the inverse process is performed by computational mechanics, which constructs a minimal causal model from the observations of a stochastic process that can generate the observed time series.
Let us assume that the stochastic process under consideration can be represented as $\overleftrightarrow{s}$. We can divide it into two segments beginning from time step t: the history before t, denoted as $\overleftarrow{s}_t$, and the future after t, denoted as $\overrightarrow{s}_t$. If the process is stationary, we can drop the subscript t. Thus, all possible histories $\overleftarrow{s}$ form a set, denoted as $\overleftarrow{S}$, and all futures $\overrightarrow{s}$ form a set, denoted as $\overrightarrow{S}$.
We aim to establish a model that can reconstruct and predict the observed random sequences with as high an accuracy as possible. However, the randomness of the sequences prevents us from achieving a perfect reconstruction unless we record every randomly occurring character, which would make the model excessively long. To preserve useful information as concisely as possible, we need a coarse-grained mapping that captures the ordered structure in the random sequences, known as patterns [63]. We can partition $\overleftarrow{S}$ into mutually exclusive and jointly exhaustive subsets that form a set $\mathcal{R}$. Any subset $R \in \mathcal{R}$ is called a “state”. We define a function from histories to states as $\eta: \overleftarrow{S} \to \mathcal{R}$. Thus, η is a method that partitions the histories into mutually exclusive and jointly exhaustive subsets.
For a set of states $\mathcal{R}$, we can measure its simplicity using a complexity metric. Intuitively, the larger the cardinality of $\mathcal{R}$, the more complex it is. Additionally, we need to consider its distribution. For example, if one state occurs frequently while others occur rarely, it is less complex compared to a situation with a uniform distribution. Therefore, we can define the statistical complexity $C_\mu$ of a set of states using Shannon entropy [79]:

$$C_\mu(\mathcal{R}) \equiv -\sum_{\rho \in \mathcal{R}} P(\mathcal{R}=\rho) \log_2 P(\mathcal{R}=\rho).$$
When constructing predictive models using a set of states, the statistical complexity refers to the size of the model.
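As a concrete illustration, the statistical complexity of a given distribution over states is simply its Shannon entropy; the sketch below (the example distributions are hypothetical) computes it:

```python
import numpy as np

def statistical_complexity(state_probs):
    """Shannon entropy (in bits) of the distribution over states R,
    i.e., C_mu(R) as defined above."""
    p = np.asarray(state_probs, dtype=float)
    p = p[p > 0]                      # 0 log 0 is taken as 0
    return -np.sum(p * np.log2(p))

# Four states visited uniformly are more complex than a skewed partition.
print(statistical_complexity([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
print(statistical_complexity([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits
```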
What kind of state set can achieve the best balance between predictiveness and parsimony? We can introduce an equivalence relation called causal equivalence [63]. Concretely, we say $\overleftarrow{s}$ and $\overleftarrow{s}'$ are causally equivalent if and only if:

$$P(\overrightarrow{s} \mid \overleftarrow{s}) = P(\overrightarrow{s} \mid \overleftarrow{s}')$$

This equivalence relation partitions all the histories into equivalence classes, which are defined as causal states. We denote the causal state of a history $\overleftarrow{s}$ as $\epsilon(\overleftarrow{s})$, where $\epsilon: \overleftarrow{S} \to 2^{\overleftarrow{S}}$ is the function that maps a history $\overleftarrow{s}$ to its causal state $\epsilon(\overleftarrow{s}) \in 2^{\overleftarrow{S}}$, which is a subset of the histories.
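A minimal sketch of this partitioning is given below. It assumes a finite sample of observed (history, future) pairs and treats exact equality of the empirical conditional distributions as the equivalence test; a practical ϵ-machine reconstruction algorithm would instead use statistical tests on the estimated distributions.

```python
from collections import defaultdict

def causal_states(pairs):
    """Group histories into causal states by causal equivalence:
    two histories land in the same state iff they induce the same
    conditional distribution over futures. `pairs` is a list of
    observed (history, future) tuples; the data are hypothetical."""
    counts = defaultdict(lambda: defaultdict(int))
    for hist, fut in pairs:
        counts[hist][fut] += 1
    states = defaultdict(set)
    for hist, futs in counts.items():
        total = sum(futs.values())
        # The conditional distribution P(future | history), used as a
        # hashable signature that defines the equivalence class.
        signature = tuple(sorted((f, n / total) for f, n in futs.items()))
        states[signature].add(hist)
    return list(states.values())

# "00" and "10" predict the same future distribution, so they merge
# into one causal state; "01" behaves differently and stays separate.
obs = [("00", "1"), ("00", "0"), ("10", "1"), ("10", "0"),
       ("01", "1"), ("01", "1")]
print(causal_states(obs))  # [{'00', '10'}, {'01'}]
```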
For any two causal states $S_i$ and $S_j$, we can define causal transitions as a set of labeled probabilities $T_{ij}^{(s)}$ that represent the transition from the causal state $S_i$ to the causal state $S_j$ while emitting the symbol $s \in \mathcal{A}$ [63]:

$$T_{ij}^{(s)} \equiv P(\mathcal{S}' = S_j, \overrightarrow{S}^{L=1} = s \mid \mathcal{S} = S_i),$$

where $\mathcal{S}$ is the current causal state, $\mathcal{S}'$ is its successor, and $\overrightarrow{S}^{L=1} = s$ denotes all the sequences of length $L = 1$ in which the first symbol emitted is s (we use slightly different symbols from those in [79]). Hence, by combining $\mathcal{S}' = S_j$ and $\overrightarrow{S}^{L=1} = s$, we ensure that all subsequent causal states will have the identical initial emitted symbol s. We denote the set $\{T_{ij}^{(s)} : s \in \mathcal{A}\}$ by T. The definition of causal transitions leads to a direct conclusion: $T_{ij}^{(s)} = P(\overleftarrow{s}s \in S_j \mid \overleftarrow{s} \in S_i) = \frac{P(\overleftarrow{s} \in S_i,\ \overleftarrow{s}s \in S_j)}{P(\overleftarrow{s} \in S_i)}$, where $\overleftarrow{s}s$ is read as the semi-infinite sequence obtained by concatenating the history $\overleftarrow{s}$ and the symbol s [79].
The ϵ-machine of a process is defined as the ordered pair $\{\epsilon, T\}$, where ϵ is the causal state function (which maps a history $\overleftarrow{s}$ to its causal state $\epsilon(\overleftarrow{s})$), and T is the set of transition matrices (the dynamics) for the states defined by ϵ.
Up until now, we have defined the ϵ-machine, which is a pattern discovery machine whose patterns are unraveled from the set of histories. The remaining task is to show that it is, in some sense, optimal. It has the important properties of being maximally predictive, minimally statistically complex, and minimally stochastic [79].
We compare the causal states $\mathcal{S}$ to any other set of states $\mathcal{R} \subseteq 2^{\overleftarrow{S}}$ to show that, for the purpose of predicting the future, the causal states do a better job, i.e., they provide more information. This property can be formulated as a mathematical theorem, called the maximal predictability theorem, stated as follows: if $\mathcal{S}$ is the set of causal states given by ϵ, then for any other set of states $\mathcal{R}$ and all $L \in \mathbb{Z}^+$, we have:

$$H[\overrightarrow{S}^L \mid \mathcal{R}] \geq H[\overrightarrow{S}^L \mid \mathcal{S}],$$

where $H(\cdot \mid \cdot)$ is the conditional entropy and $\overrightarrow{S}^L$ denotes the future sequences of length L. Thus, the inequality shows that the uncertainty of $\overrightarrow{S}^L$ given the causal states $\mathcal{S}$ is no greater than that given any other set of states.
After achieving optimal predictiveness, the causal state set remains the one with minimal statistical complexity. We first introduce the notion of prescient rivals, denoted as $\widehat{\mathcal{R}}$, which are the states that are as predictive as the causal states; viz., for all $L \in \mathbb{Z}^+$, $H[\overrightarrow{S}^L \mid \widehat{\mathcal{R}}] = H[\overrightarrow{S}^L \mid \mathcal{S}]$.

Next, we present the minimum statistical complexity theorem: for all prescient rivals $\widehat{\mathcal{R}}$,

$$C_\mu(\widehat{\mathcal{R}}) \geq C_\mu(\mathcal{S}).$$
Next, we show that the causal states are minimally stochastic. That is to say, compared with other competitors with the same ability to predict the future, the causal states and their transition dynamics have the least uncertainty. The minimal stochasticity theorem is expressed as follows: for all prescient rivals $\widehat{\mathcal{R}}$,

$$H[\widehat{\mathcal{R}}' \mid \widehat{\mathcal{R}}] \geq H[\mathcal{S}' \mid \mathcal{S}],$$

where $\mathcal{S}'$ and $\widehat{\mathcal{R}}'$ are the next causal state of the process and the next rival state, respectively. This means that the causal states and the ϵ-machine provide the best intrinsic determinism.
Since the causal state set is considered the best, how can we compute the causal states and ϵ -machine from the observed data? The authors of [63] introduced a hierarchical machine reconstruction algorithm; however, the details are not reiterated here.
Although this algorithm may not be applicable to all operational scenarios, the authors presented numerical computational results and corresponding machine reconstruction pathways for chaotic dynamics, hidden Markov models, and cellular automata as examples [63,79].
It is interesting to compare the theory of computational mechanics with causal emergence. Indeed, we can understand all the histories $\overleftarrow{s}$ as micro-level states and all the states $R \in \mathcal{R}$ as macro-states. Thus, the function η that maps a history $\overleftarrow{s}$ to a state R is a possible coarse-graining strategy.
It is worth pointing out that the causal state $\epsilon(\overleftarrow{s})$ is the special state that has at least the same predictive power as the micro-state $\overleftarrow{s}$, i.e., the full history. Therefore, ϵ is similar to the notion of the effective coarse-graining strategy in [31] (see Section 3.6.5), and the causal transitions T represent the corresponding effective macro-level dynamics. The feature of minimal stochasticity characterizes the deterministic property of the macro-dynamics; this property is characterized by effective information (EI) in causal emergence theory.
Although a clear definition and quantitative theory of emergence were not provided, the authors discussed the relationship between computational mechanics and emergence [63,80]. In [63], the authors explained that emergence can be conceptualized as a dynamic process in which a pattern acquires the ability to predictably adapt to different environments, as observed by an external observer. Additionally, they differentiated intrinsic emergence from emergence itself, as intrinsic emergence goes beyond the mere production of patterns and encompasses the formation of an embedded observer within the system through these patterns.

3.3. G-Emergence Theory

G-emergence theory, proposed by Seth in 2008 [23], is one of the earliest works on a quantitative measure of emergence. His basic idea is to use nonlinear Granger causality to measure weak emergence in complex systems.
Granger causality (G-causality) is formally defined as follows: given two time series A and B, if the past values of B help predict the future values of A beyond what can be forecasted using the past values of A alone, then B is said to “Granger-cause” A. This implies the existence of Granger causality from B to A.
When applying a bivariate autoregressive model for prediction, residual terms are included in the equations of the two variables, and these residuals can be utilized to quantify the extent of the causal effect in G-causality. The degree to which B G-causes A is quantified by the logarithm of the ratio of two residual variances: the residual variance of the autoregression model of A with all the terms of B omitted, and the residual variance of the full prediction model, as shown in Figure 3. In addition, in [23], the author also defined G-autonomy as the ability of the past values of a time series to predict its own future values; the degree of G-autonomy can be measured in a similar way to G-causality.
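The following numpy sketch illustrates this variance-ratio measure for an order-1 bivariate autoregression. It is a minimal illustration, not Seth’s original implementation: the lag order, the toy coupled series, and the absence of significance testing are all simplifying assumptions.

```python
import numpy as np

def g_causality(a, b, lag=1):
    """Degree to which b G-causes a: the log ratio of the residual
    variance of the restricted model (past of a only) to that of the
    full model (past of a and b)."""
    n = len(a)
    y = a[lag:]
    past_a = np.column_stack([a[lag - k - 1: n - k - 1] for k in range(lag)])
    past_ab = np.column_stack([past_a] +
                              [b[lag - k - 1: n - k - 1] for k in range(lag)])

    def residual_var(X, y):
        X = np.column_stack([np.ones(len(y)), X])  # add an intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.var(y - X @ beta)

    return np.log(residual_var(past_a, y) / residual_var(past_ab, y))

rng = np.random.default_rng(0)
b = rng.normal(size=1000)
a = np.roll(b, 1) + 0.1 * rng.normal(size=1000)  # a follows b with lag 1
print(g_causality(a, b))  # large and positive: b G-causes a
print(g_causality(b, a))  # near zero: a does not G-cause b
```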
With these two basic notions of G-causality, as shown in Figure 4, the author defined G-emergence for macro-variables as follows: a macro-variable M is G-emergent from a set of micro-variables {m} if and only if (i) M is G-autonomous with respect to {m}, and (ii) {m} G-causes M. The degree of G-emergence can be quantitatively measured by multiplying the degree of G-autonomy of M by the average degree to which the micro-variables in {m} G-cause M.
The author tested the G-emergence theory on the Boid model. This model is a famous artificial life model that simulates the flocking behavior of birds using three simple rules: cohesion, alignment, and separation [60]. These basic rules are realized by the virtual forces exerted on each bird, and the strength of each force can be controlled. The author found that by increasing the strength of cohesion, the G-emergence of the entire flock also increased. Here, the macro-variable is selected as the center of the flock (center of mass), and each bird is a micro-variable. The author also discovered a downward causality phenomenon in this simple model: the center of mass can be used to predict each individual bird. However, the author did not distinguish downward causation from other common causality in their work because Granger’s causality is not a real causal relationship.
Seth’s G-emergence theory was the first attempt to quantify the emergence phenomenon via a causality measure. However, the causality measure used was Granger causality, which is not a strict causal measure, and it also depends on the choice of regression method. Furthermore, the measure is defined on variables rather than dynamics, which means the result depends on the selection of variables.

3.4. Other Quantitative Theories of Emergence

There are alternative means of measuring emergence that do not rely on causality. Two different approaches have been discussed by scholars: one understands emergence as a process from disorder to order, and the other understands it from the perspective that “the whole is greater than the sum of its parts”.
For example, Moez Mnif and Christian Müller-Schloer used Shannon entropy [81] to measure order and disorder. In a self-organized process, emergence occurs when there is an increase in order. This increase can be quantified by measuring the Shannon entropy difference between the initial state and the final state, denoted as $H_{start} - H_{end}$. However, this definition has two limitations. Firstly, the measurement of entropy, denoted as H, is dependent on the abstraction level of the observation. Therefore, it is necessary to account for the entropy increase resulting from a change in the observer’s abstraction level. Secondly, the choice of the initial condition of the system is arbitrary. To address this limitation, one approach is to measure the relative level of Shannon entropy compared to the maximum entropy distribution. Finally, emergence can be quantified by:

$$Emergence = H_{max} - H - \Delta H_{view},$$

where $H_{max}$ is the maximum entropy of the system. If no prior information is available, $H_{max}$ corresponds to the Shannon entropy of the uniform distribution. On the other hand, H represents the Shannon entropy of the system at the final moment of a self-organization process. Additionally, $\Delta H_{view}$ represents the entropy increase during this process resulting from a change in the observer’s abstraction level; if the observer does not alter their abstraction level, this term is zero. We can also normalize this quantity by dividing by $H_{max}$ so that different features can be compared on the same scale. For a multivariate system, the authors suggested employing a radar plot to visualize the emergence fingerprint of the dynamical process, which showcases the normalized emergence measure across various variables. The authors applied this method to a simulated system of chickens, and M. Tang and X. Mao applied this indicator to artificial society models [82].
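A minimal sketch of this measure, assuming a discrete distribution over final states and a user-supplied ΔH_view term (the example distributions are hypothetical):

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def emergence(final_probs, delta_h_view=0.0):
    """Emergence = H_max - H - Delta_H_view, where H_max is the entropy
    of the uniform distribution over the same number of states and
    delta_h_view accounts for a change in the observer's abstraction
    level (zero if the level is unchanged)."""
    h_max = np.log2(len(final_probs))
    return h_max - shannon_entropy(final_probs) - delta_h_view

# A self-organizing run ends in a highly ordered (peaked) distribution:
print(emergence([0.85, 0.05, 0.05, 0.05]))  # ~1.15 bits: order has emerged
print(emergence([0.25, 0.25, 0.25, 0.25]))  # 0.0: no order has emerged
```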
Inspired by Moez Mnif and Christian Müller-Schloer’s work, ref. [83] suggested using the divergence measure between two probability distributions to better quantify emergence. They understood emergence as being an unexpected or unpredictable change of the distribution underlying the observed samples. However, this method suffers from computational complexity and estimation accuracy. To address these problems, ref. [84] further proposed an approximating method using the Gaussian mixture model to estimate the density and introduced Mahalanobis distance to characterize the divergence between data and Gaussian components, leading to better results. In [85], the authors systematically compared the three aforementioned methods and applied them to a simple test example. Another Shannon entropy-based emergence measure was proposed by Holzer and de Meer et al. [86,87]. They considered a complex system as a self-organization process in which different individuals interact with communications. Emergence can then be measured based on a ratio between the Shannon entropy measure on all communications between agents and the total summation of the Shannon entropy for each communication as a separate source.
Unlike the aforementioned methods, refs. [88,89] proposed a method to quantify emergence based on the idea that “the whole is greater than the sum of its parts”, defining emergence from the interaction rules and states of agents instead of the overall statistical measure of the whole system. Specifically, this measure consists of two terms that are subtracted from each other. The first term characterizes the collective states of the entire system, whereas the second term represents the summation of the individual states of all its constituent parts. This measure emphasizes the emergence that arises from the interactions and collective behavior of the system. This method was then tested on the example of bird flock simulation.

3.5. Erik Hoel’s Causal Emergence Theory

3.5.1. Basic Idea

The first quantitative emergence theory based on Markov dynamics and causality measures through intervention was Erik Hoel’s causal emergence theory.
In this framework, system properties can be characterized at various levels, ranging from micro to macro. If a system exhibits stronger causality at the macro-level than at the micro-level, it demonstrates causal emergence. Causality is reflected between successive states during the system’s evolution. The strength of a system’s causality reveals the extent to which its future state is influenced by its current state. The basic idea of causal emergence is illustrated in Figure 5.
For example, statistical mechanics is a typical theory of causal emergence. At the micro-level, a huge number of molecules collide and exhibit random behaviors such that probabilistic language must be used to describe them. However, if we coarse-grain the whole system into several thermodynamic physical variables like pressure, temperature, etc., we can use very concise and exact thermodynamic equations to describe their behaviors. Therefore, thermodynamic laws have a stronger causal effect than micro-level molecular dynamics.
Formal tools are used to describe the elements in Hoel’s causal emergence theory. Typically, it employs discrete Markov models to describe the micro-dynamics of systems, and the corresponding macro-level systems with different Markov dynamics can be derived by coarse-graining the micro-systems. Additionally, the inherent strength of the causal effect of the Markov model can be measured with effective information (EI), indicating how effectively a particular state influences the future state of a system.
EI is an intrinsic property of a system’s dynamics and can be quantified by the transitional probability matrix (TPM). A coarse-graining strategy is a function that maps a set of micro-states into a particular macro-state, allowing for the derivation of a new dynamical model described by TPM from the micro-level TPM. The effective information of the coarse-grained model can also be computed. The phenomenon of causal emergence implies that as we coarse-grain microscopic states, the amount of effective information transmitted from the current state to the next state can possibly increase. At a certain macroscopic scale of coarse-graining, the effective information reaches a maximum; this scale represents the point at which the system state has the maximum causal power to predict future states in the most reliable and effective way.
To illustrate causal emergence, let us look at a particular Markov chain model with four possible states, whose transition probability matrix (TPM) is:
$$S_m = \begin{pmatrix} 1/3 & 1/3 & 1/3 & 0 \\ 1/3 & 1/3 & 1/3 & 0 \\ 1/3 & 1/3 & 1/3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
In this model, if we take the first three states as group 1 and the last state as group 2, any state in group 1 can transition to any of the three states within the same group with equal probability at the next moment. However, because group 2 contains only one state, the fourth state always stays in its position. Intuitively, we can conclude that the future states are not fully determined by the current states, and the uncertainty mainly arises from group 1.
However, if we merge the first three states in group 1 into one new state and keep the fourth state in group 2 as it is, we obtain a coarse-grained Markov model with two states, represented by the following TPM:
$$S_M = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Now, the future states of the new system can be fully determined by the current states. This shows that we can eliminate the uncertainty of a nondeterministic system by performing coarse-graining over the system states.
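This example can be reproduced in a few lines of code. The lumping rule below, which averages the grouped rows, is a sketch that is exact here because the rows within each group are identical; coarse-graining a general TPM would also require specifying a distribution over the micro-states within each group.

```python
import numpy as np

# The micro-level TPM S_m from the example above.
S_m = np.array([[1/3, 1/3, 1/3, 0],
                [1/3, 1/3, 1/3, 0],
                [1/3, 1/3, 1/3, 0],
                [0,   0,   0,   1]])

def coarse_grain(tpm, groups):
    """Lump micro-states into macro-states: sum outgoing probability
    over each target group, then average over the source group."""
    k = len(groups)
    macro = np.zeros((k, k))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            macro[i, j] = np.mean([tpm[a, gj].sum() for a in gi])
    return macro

# Group 1 = the first three states, group 2 = the fourth state.
S_M = coarse_grain(S_m, [[0, 1, 2], [3]])
print(S_M)  # the identity TPM: the macro dynamics are fully deterministic
```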

3.5.2. About Coarse-Graining

The example above illustrates how a system’s determinism can increase as it is coarse-grained. Coarse-graining is a process that simplifies the description of a system by grouping its components into larger, more slowly varying units. It is often used to identify the essential features of a system that determine its macroscopic behavior, without being burdened by the details of micro-scale interactions. Unlike dimension reduction techniques, such as PCA and SVD, coarse-graining takes into account the system’s features at different spatial and temporal scales. Coarse-graining also differs from renormalization. In physics, the renormalization group method was invented to eliminate infinities in integrals. It is also used to coarse-grain a system such that the Hamiltonian or partition functions are similar between micro- and macro-levels [90,91]. However, the coarse-graining method does not have this requirement in general. Although both techniques are designed to describe a system from a coarse-grained level, coarse-graining focuses more on the states of the system, whereas renormalization cares about dynamics, system rules, partition functions, etc. Renormalization is frequently used in physics and the study of phase transitions to explore critical phenomena and symmetry breaking. These are aspects that coarse-graining typically does not consider.
Given a transitional probability matrix (TPM), there are many ways to partition the state space for coarse-graining the system. What kind of coarse-graining strategies are more reasonable? In [19,25], the authors suggested selecting an appropriate coarse-graining method by maximizing the effective information of the coarse TPM. In the literature, there is another criterion for selecting a coarse-graining strategy called lumpability, where the lumpability of a TPM refers to the coarse-grained TPM exhibiting similar dynamics and observation statistics [92]. Commutativity is another requirement that requires the operations of coarse-graining and state transitions to be commutative [93]. There are also other discussions or reasonable model reductions of Markov or hidden Markov models [94,95,96].

3.5.3. About Effective Information

In order to quantify emergence, a widely used method is to calculate the mutual information between the past states and future states of a system. This mutual information defines the upper limit of the information being transferred from the past to the future. If the mutual information is high, it suggests that a significant amount of information about the past is retained in the future.
$$I(X_t; X_{t+1}) = \sum_{X_t, X_{t+1}} P(X_t, X_{t+1}) \log \frac{P(X_t, X_{t+1})}{P(X_t) P(X_{t+1})}$$
One limitation of utilizing mutual information is that the resulting value may fluctuate in response to changes in the joint probability of data. This can make it difficult to obtain a consistent outcome if the system is exposed to different inputs.
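To see this dependence concretely, the sketch below (the two-state TPM and the input distributions are hypothetical) computes the mutual information between successive states under two different input distributions for the same TPM and obtains two different values:

```python
import numpy as np

def mutual_info(tpm, p_in):
    """I(X_t; X_{t+1}) for a Markov chain with TPM `tpm` when the
    current state follows the distribution `p_in` (in bits)."""
    tpm = np.asarray(tpm, dtype=float)
    joint = tpm * np.asarray(p_in)[:, None]   # P(X_t = i, X_{t+1} = j)
    p_out = joint.sum(axis=0)                 # P(X_{t+1} = j)
    mi = 0.0
    for i in range(len(tpm)):
        for j in range(len(tpm)):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (p_in[i] * p_out[j]))
    return mi

tpm = np.array([[0.9, 0.1], [0.2, 0.8]])
print(mutual_info(tpm, [0.5, 0.5]))  # ~0.40 bits under a uniform input...
print(mutual_info(tpm, [0.9, 0.1]))  # ...but ~0.16 bits under a skewed input
```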

Effective Information (EI)

To address this limitation, Hoel introduced the concept of effective information to measure the causal effect of the current state on the future state of a system. Effective information is a scoring metric based on mutual information and can be calculated from the system’s transition probability matrix (TPM), which is invariant to the input data.
By “intervening” in the current state distribution to make it follow the uniform distribution (the maximum entropy distribution), denoted as $I_D$, the TPM enables the prediction of the future state distribution $E_D$ at the next moment. The effective information (EI) of the system is defined as the mutual information between $I_D$ and $E_D$, which can be expressed as

$$EI \equiv I(I_D; E_D) = I(X_t; X_{t+1}) \big|_{do(X_t) \sim U}$$
The probability distribution of the random variable $X_t$, representing the initial state, is denoted as $do(X_t) \sim U$, indicating that it is intervened to follow a uniform distribution. Hoel adopted the notion of the “do” operator from Judea Pearl’s causal analysis framework [25]. It is worth noting that, in Pearl’s context, the do operator is typically used to assign a specific value to the intervened variable rather than to impose a distribution [16].
Another point that needs clarification is that the “do” operator used here is purely a mathematical construct that specifies the distribution of X t . It does not imply the need for actual intervention in the system. However, the imaginary “do” operation is equivalent to a real intervention in the context of this scenario, given that the dynamical mechanism (TPM) is provided. Consequently, we can perform any desired intervention on the system, just as in computer simulations.
The second issue pertains to why we apply the “do” operator to a uniform distribution. In [19], Hoel et al. claimed that the distribution should be the maximum entropy distribution, which is the most reasonable selection for the input variable X t if we have no prior information about the input variable [97]. As we know, uniform distribution can be derived by maximizing entropy if there is no constraint.
The authors of this review believe that applying the “do” operator to a uniform distribution ensures that the objective measured by EI solely reflects the dynamical mechanism itself, i.e., the TPM, and is independent of any input data.
This point can be clearer if we re-express EI as a function of the TPM of the system (see Appendix A for the detailed derivation):
$$EI = \frac{1}{N} \sum_{i,j} TPM(i,j) \log_2 \frac{N \times TPM(i,j)}{\sum_k TPM(k,j)},$$

where $TPM(i,j)$ represents the transition probability of the system from state i to state j.
Therefore, effective information offers a solution to the aforementioned limitation of mutual information. Given that the TPM can capture the inherent nature of a system, EI is likewise an intrinsic property of the system.
However, if we apply the “do” operator to the system using different distributions, EI will depend on the chosen distribution, and certain transition probabilities of specific rows may carry greater weight in the average calculation. Consequently, deriving a simple expression solely based on the transitional probability matrix (TPM) may not be possible. Furthermore, if there is no “do” operator, the effects of the dynamical mechanism and the input distribution in EI would not be distinguishable or separated, as the mutual information is a function of the input distribution p ( X t ) .
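As a concrete illustration, here is a minimal sketch (ours, not code from [19]) that computes EI directly from a row-stochastic TPM following Equation (8):

```python
# EI of a row-stochastic TPM under the uniform (maximum-entropy) intervention,
# following EI = (1/N) * sum_ij TPM(i,j) * log2( N*TPM(i,j) / sum_k TPM(k,j) ).
import numpy as np

def effective_information(tpm: np.ndarray) -> float:
    n = tpm.shape[0]
    col_sums = tpm.sum(axis=0)            # sum_k TPM(k, j)
    ei = 0.0
    for i in range(n):
        for j in range(n):
            if tpm[i, j] > 0:
                ei += tpm[i, j] * np.log2(n * tpm[i, j] / col_sums[j])
    return ei / n

# A bijective (permutation) TPM attains the maximum EI = log2(N):
# effective_information(np.eye(4)) -> 2.0
```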

The Derived Measures of EI

Furthermore, we can calculate the EIs for the TPMs of both macro- and micro-dynamics, and their difference is defined as the measure of causal emergence, that is,
$$CE = EI(TPM_M) - EI(TPM_m),$$
where $TPM_M$ represents the TPM of the macro-level dynamics, and $TPM_m$ represents the TPM of the micro-level dynamics. $CE$ measures the degree of causal emergence: if $CE > 0$, causal emergence occurs; otherwise, it does not.
However, EI and CE are limited when comparing two dynamics that differ significantly in size, because the value of effective information depends on the number of possible states of the system, denoted as $N$, with an upper bound of $\log_2(N)$. To facilitate comparisons between different coarse-graining strategies and scales, effective information is often normalized, resulting in a metric known as the effect coefficient $Eff$:
$$Eff = \frac{EI}{\log_2(N)}$$
The value of the effect coefficient is always between 0 and 1, representing the proportion of effective information being transferred from current states to future states. If the information is fully transferred, the effect coefficient is 1.
In addition to characterizing causal emergence, the effect coefficient can be broken down into two meaningful components: $Determinism$, which quantifies the certainty with which a current state evolves into a single state rather than diverging into multiple states at the next moment, and $Degeneracy$, which quantifies the possibility of multiple current states converging into one state at the next moment.
$$Eff = Determinism - Degeneracy$$
where both $Determinism$ and $Degeneracy$ can be defined in terms of the TPM (refer to Equations (8) and (10)):
$$Determinism = \frac{1}{N \log_2(N)} \sum_{i,j} TPM(i,j)\, \log_2\big(N \times TPM(i,j)\big)$$
$$Degeneracy = \frac{1}{N \log_2(N)} \sum_{i,j} TPM(i,j)\, \log_2\Big(\sum_k TPM(k,j)\Big)$$
It should be noted that the determinism of a system is always at least as great as its degeneracy, since the effect coefficient is bounded below by 0. The following examples illustrate what determinism and degeneracy look like in systems with varying TPMs.
In Figure 6, the square cells represent the elements of the TPM, and the grayscale areas represent the values of the TPM elements. Example (a) is a bijective system, meaning that all information from current states can be transferred to future states without loss. It is fully deterministic with zero degeneracy. Example (b) is an extreme case where all current states lead to only one future state, illustrating that high determinism does not necessarily imply a high effect coefficient. Systems (c) and (d) differ but have the same effect coefficients. Finally, system (e) is a coarse-grained version of either system (c) or (d), demonstrating two important points: first, different microscopic systems can be coarse-grained to the same macroscopic system, and second, causal emergence can be quantitatively captured from coarse-graining by the increased effect coefficient.
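For completeness, here is a small sketch (our own, under the same TPM conventions as the earlier EI sketch) of this decomposition. The degeneracy term is computed from the averaged next-state distribution, which is algebraically equivalent to the form given above:

```python
import numpy as np

def determinism(tpm: np.ndarray) -> float:
    n = tpm.shape[0]
    mask = tpm > 0
    return float((tpm[mask] * np.log2(n * tpm[mask])).sum()) / (n * np.log2(n))

def degeneracy(tpm: np.ndarray) -> float:
    # Equivalent to the definition above with q_j = sum_k TPM(k, j) / N,
    # since sum_ij TPM(i,j) log2(sum_k TPM(k,j)) = N * sum_j q_j log2(N q_j).
    n = tpm.shape[0]
    q = tpm.sum(axis=0) / n
    mask = q > 0
    return float((q[mask] * np.log2(n * q[mask])).sum()) / np.log2(n)

def eff(tpm: np.ndarray) -> float:
    return determinism(tpm) - degeneracy(tpm)

# Example (b) in the text: every state maps to state 0, giving full
# determinism and full degeneracy, hence Eff = 0.
tpm_b = np.zeros((4, 4)); tpm_b[:, 0] = 1.0
assert abs(eff(tpm_b)) < 1e-12
```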
One limitation of EI is its global nature: the summation in Equation (8) runs over the entire state space. To address this, ref. [98] proposed the concept of flickering emergence, which decomposes EI into the individual terms of the sum in Equation (8). This decomposition can characterize the local properties of Markov dynamics.

Comparison with Other Measures of Causation

Hoel’s theory of causal emergence is based on effective information; however, is the choice of EI necessary for measuring causation? Is causal emergence a phenomenon that depends on the chosen measure of causation? To address these questions, Comolatti and Hoel [28] systematically compared effective information (EI) with other measures of causation that are widely applied in various fields, ranging from philosophy to genetics.
The findings of this comparison revealed that causal emergence is not merely a peculiar phenomenon limited to a specific measure. Instead, it exhibits commonalities and shared characteristics across different measures and fields of study. EI is not the sole measure for capturing causal emergence; there are other measures of causation that can also reveal the phenomenon of causal emergence. This suggests that the concept of causal emergence has broader applicability and relevance. We introduce this work in detail below.
Firstly, Hoel highlighted that causation is not merely a singular relationship between a cause and an effect. Instead, it encompasses two fundamental dimensions known as causal primitives: sufficiency and necessity.
The sufficiency aspect of causation refers to the scenario where the occurrence of the cause guarantees the occurrence of the effect: whenever the cause happens, the effect is also observed. This sufficiency dimension, denoted as $suff$, can be formally defined as the probability that the effect $e$ occurs given that the cause $c$ has occurred:
$$suff(e, c) = P(e \,|\, c).$$
In addition, a necessity ($nec$) relation in causation refers to the absence of the effect implying the absence of the cause: when the effect does not occur, the cause has not occurred either. This can be understood as a causal effect measure for counterfactuals. The necessity dimension, denoted as $nec$, can be quantitatively defined as the probability that the effect $e$ does not occur given that the cause $c$ has not occurred:
$$nec(e, c) = 1 - P(e \,|\, C \setminus c),$$
where $C \setminus c$ represents the set of all possible causes in $C$ with the particular cause $c$ excluded. Therefore, $P(e \,|\, C \setminus c)$ is the probability that $e$ occurs if causes other than $c$ occur.
With these causal primitives defined, Hoel compared different measures of causation, including Hume’s constant conjunction [40], Cheng’s causal attribution [99], Eells’s measure of causation as probability raising [41], Suppes’s measure of causation as probability raising [42], and Judea Pearl’s measures of causation [16]. The finding was that all of these measures can be expressed in terms of the two causal primitives.
For example, Cheng’s causal attribution can be expressed as:
$$CS_{Cheng} = \frac{suff(e, c) + nec(e, c) - 1}{nec(e, c)}$$
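To make the primitives concrete, the toy sketch below (our illustration, not from [28]) computes $suff$, $nec$, and Cheng’s measure for a binary cause–effect pair:

```python
# Causal primitives for a binary cause/effect pair, plus Cheng's measure.
def suff(p_e_given_c: float) -> float:
    return p_e_given_c                      # suff(e, c) = P(e | c)

def nec(p_e_given_not_c: float) -> float:
    return 1.0 - p_e_given_not_c            # nec(e, c) = 1 - P(e | C \ c)

def cheng(p_e_given_c: float, p_e_given_not_c: float) -> float:
    s, n = suff(p_e_given_c), nec(p_e_given_not_c)
    return (s + n - 1.0) / n                # CS_Cheng = (suff + nec - 1) / nec

# cheng(0.9, 0.2) == (0.9 + 0.8 - 1.0) / 0.8 == 0.875
```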
With this understanding, a natural question arises: Can effective information (EI) be expressed using causal primitives? The answer is affirmative. However, to clarify this point, two important distinctions need to be made. Firstly, in most measures of causation, the causal variables are binary, whereas EI is defined for variables with multiple values. Secondly, EI is an information-theoretic measure, whereas others are probabilistic measures. Despite these distinctions, EI can still be expressed using causal primitives.
To understand this, let us examine the equivalent measure of EI: the normalized effective information ($Eff$) in Equation (11). This measure contains two terms: the first is $determinism$, and the second is $degeneracy$. These two terms can be expressed in terms of $suff$ and $nec$:
$$determinism = 1 - \frac{\sum_{c \in C} P(c)\, H(e|c)}{\log_2 N},$$
where $H(e|c) \equiv -\sum_{e \in E} P(e|c) \log_2 P(e|c)$ is the Shannon entropy of the conditional probability $P(e|c)$ (which is $suff(e,c)$), and $P(c), c \in C$ is the distribution over all causes. In the definition of EI (Equation (7)), this distribution is intervened to be uniform, so that equal weights are assigned to all causes $c \in C$. It is not hard to see that $determinism$ is an information-theoretic metric for $suff(e,c)$, averaged over all causes. Furthermore, as $suff(e,c) = P(e|c)$ increases and approaches one, indicating a strong causal effect, the entropy $H(e|c)$ decreases and determinism concurrently increases. Thus, the determinism term in EI plays a role analogous to that of $suff$ in causal primitives.
The other term is $degeneracy$, which can be rewritten as:
$$degeneracy = 1 - \frac{H(e|C)}{\log_2 N},$$
where $H(e|C) \equiv -\sum_{e \in E} P(e|C) \log_2 P(e|C)$ is the Shannon entropy of the conditional probability $P(e|C)$, obtained by averaging the causal effect over all elements of $C$: $P(e|C) = \sum_{c \in C} P(c) P(e|c)$. The entropy $H(e|C)$ serves as a measure of the average causal effect for counterfactuals, since $P(e|C)$ can be interpreted as $P(e|C \setminus c)$: the approximation $P(e|C) \approx P(e|C \setminus c)$ holds when the number of elements in $C$ is sufficiently large, and $C$ certainly encompasses $C \setminus c$. Consequently, $degeneracy$ in EI acts as the counterpart of $nec$ in causal primitives.
Therefore, EI (or $Eff$) is a valuable measure of causation, particularly in cases where the cause and effect variables are not limited to binary values.
Comolatti and Hoel [28] conducted additional experiments to investigate the phenomenon of causal emergence using various Markovian dynamics, employing different measures of causation, as discussed above. Their findings revealed the widespread occurrence of causal emergence, regardless of the specific measure of causation employed.

3.5.4. Examples of Causal Emergence

The example above demonstrates a coarse-graining strategy that is intuitive, as it involves aggregation at the state level. However, in reality, coarse-graining often occurs among variables, and combinations of these variables produce different states. Let us look at a more complicated example of a Boolean network, given in [19], and inspect causal emergence when coarse-graining its state transition mechanism. A, B, C, and D are four binary variables whose state transition relations are shown in Figure 7a.
In order to demonstrate how different coarse-graining strategies can impact effective information, we can combine two micro-states into a single macro-state. For example, we can define α = ( A , B ) and β = ( C , D ) as coarse-grained variables at the macro-level, where their values can be either ‘off’ or ‘on’. Using various aggregation strategies, as illustrated in (a)/(b) and (d)/(e) in Figure 8, we can construct the TPMs for the resulting coarse-grained systems. The TPM and the corresponding metrics are presented in Figure 8c,f, respectively.
The effective information values for the two macroscopic systems are $EI_{ma1} = 1.55$ and $EI_{ma2} = 0.18$, respectively. It is evident that the effective information has increased in the first coarse-graining scenario ($EI_{ma1}(1.55) > EI_{mi}(1.15)$), whereas it has decreased in the second ($EI_{ma2}(0.18) < EI_{mi}(1.15)$). Thus, causal emergence is observed in the first coarse-graining scenario. This implies that (1) proper coarse-graining can lead to causal emergence in a macroscopic system, and (2) not all coarse-graining approaches result in higher effective information. This example illustrates that coarse-graining can play an important role in emergence, and effective information is a well-defined quantification method. This gives us the opportunity to identify causal emergence in a given system. It also raises the following question: What coarse-graining strategy generates the maximum effective information? We address this question in Section 4.

3.5.5. Extensions to Continuous Systems

Effective information and its associated metrics have been shown to be useful in quantifying causal emergence. However, there are some limitations to be aware of. First, the method is only applicable to discrete-state systems. Second, knowledge of the system’s transition probability matrix (TPM) is required to calculate the effective information metrics. Efforts have been made to apply or extend the concepts of effective information to continuous systems [26,31,100].

Ordinal Partition Network

One approach to incorporating effective information in continuous systems is to transform the continuous variables into discrete ones using the ordinal partition network (OPN) method [100]. The basic idea is to transform the time-series data generated by a continuous dynamical system into a sub-time series of dimensionality d with discrete values and then establish a transition probability matrix (TPM) between these discrete series. This approach involves three steps: firstly, sampling the original time series with a specified time interval τ to create sub-time series of dimensionality d; secondly, ranking these sub-series based on their values; and finally, utilizing the d-dimensional vectors of rank orders to represent the original sub-series. By employing rank orders, this method ensures that the values are restricted to integers within a certain range.
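A compact sketch of the ordinal-partition step is given below (our reading of the OPN construction; names and details are ours). The resulting TPM can then be fed to an EI routine such as the one sketched earlier:

```python
import numpy as np

def ordinal_symbols(x, d=3, tau=1):
    """Map each delay window of length d (spacing tau) to its rank-order pattern."""
    n = len(x) - (d - 1) * tau
    return [tuple(np.argsort(x[i:i + d * tau:tau])) for i in range(n)]

def ordinal_tpm(symbols):
    """Row-stochastic transition matrix over the observed ordinal states."""
    states = sorted(set(symbols))
    idx = {s: k for k, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```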
In [100], the author applied the OPN method to the Rössler chaotic attractor. By quantifying effective information, he observed that EI and its components (determinism and degeneracy) were sensitive to the critical phase transition between periodic and chaotic regimes [100].

Causal Geometry

Alternatively, Pavel Chvykov and Erik Hoel extended the causal emergence framework to continuous systems and proposed the concept of causal geometry [26].
To understand the basic idea of this work, we suppose the continuous system considered is a continuous functional map and assume that the uncertainties are small disturbances added to this deterministic function, that is,
$$y = f(x) + \varepsilon,$$
where $x$ is the input variable defined on the interval $[-L/2, L/2]$, $L$ is a very large constant, $y$ is the output variable, and $\varepsilon$ is a Gaussian noise with zero mean and standard deviation $\epsilon \ll L$. Then, an approximate form of the effective information for such a continuous system can be obtained using the Gaussian integral:
$$EI \approx \ln\left(\frac{L}{\sqrt{2\pi e}}\right) + \frac{1}{2L} \int_{-L/2}^{L/2} \ln \left(\frac{f'(x)}{\epsilon}\right)^2 dx.$$
The reason we set the domain of $x$ to $[-L/2, L/2]$ is to guarantee that the Gaussian integral can be carried out and to simplify the definition of a uniform distribution on it.
This form can easily be generalized to a dynamical system with discrete time steps by replacing $x$ and $y$ with $x(t)$ and $x(t+1)$, respectively. The formula can also be extended to higher dimensions, with $\mathbf{x}$ and $\mathbf{y}$ represented in bold. Suppose $\mathbf{x} \in [-L/2, L/2]^n \subset \mathbb{R}^n$ and $\mathbf{y} \in \mathbb{R}^m$, where $n$ and $m$ are positive integers. Equation (19) can then be generalized to the following form:
$$EI \approx \ln\left(\frac{L^n}{(2\pi e)^{m/2}}\right) + \frac{1}{2}\, \mathbb{E}_{\mathbf{x} \sim U([-\frac{L}{2}, \frac{L}{2}]^n)} \ln \left[\det\left(\partial_{\mathbf{x}} f(\mathbf{x})\, \Sigma^{-1/2}\right)\right]^2,$$
where $\Sigma$ is the covariance matrix of the Gaussian noise $\varepsilon$, $U([-\frac{L}{2}, \frac{L}{2}]^n)$ represents the uniform distribution on the hypercube $[-\frac{L}{2}, \frac{L}{2}]^n$, $|\cdot|$ is the absolute value operation, and $\det$ is the determinant.
Note that the conditional distribution of y given x is a Gaussian distribution, that is, p ( y | x ) = N ( f ( x ) , Σ ) . Thus, the term in the expectation in Equation (20) can be written as:
$$\left[\det\left(\partial_{\mathbf{x}} f(\mathbf{x})\, \Sigma^{-1/2}\right)\right]^2 = \det\left(-\mathbb{E}_{\mathbf{y}|\mathbf{x}}\left[\partial_\mu \partial_\nu \ln p(\mathbf{y}|\mathbf{x})\right]\right),$$
and this corresponds to the determinant of the Fisher information metric (the negative expected Hessian of the log-likelihood) of the distribution $p(\mathbf{y}|\mathbf{x})$:
$$g_{\mu\nu} \equiv -\mathbb{E}_{\mathbf{y}|\mathbf{x}}\left[\partial_\mu \partial_\nu \ln p(\mathbf{y}|\mathbf{x})\right],$$
which measures the sensitivity of $\ln p(\mathbf{y}|\mathbf{x})$ to changes in $\mathbf{x}$, where $\partial_\mu \equiv \partial/\partial x^\mu$ denotes the partial derivative with respect to the $\mu$th component of $\mathbf{x}$. We have thus defined a distance metric for the Riemannian manifold $M = \{p(\mathbf{y}|\mathbf{x})\}$ over the parameter space $\mathbf{x} \in [-L/2, L/2]^n$, encompassing all possible distributions $p(\mathbf{y}|\mathbf{x})$. This is the origin of the term “geometry” in this framework.
Finally, we can obtain an expression for EI with the Fisher information metric:
$$EI \approx \ln\left(\frac{L^n}{(2\pi e)^{m/2}}\right) + \frac{1}{2}\, \mathbb{E}_{\mathbf{x} \sim U([-\frac{L}{2}, \frac{L}{2}]^n)} \ln \left|\det(g_{\mu\nu})\right|.$$
This formula can be generalized to cases where p ( y | x ) is a non-Gaussian distribution. Once the distribution function is known, we can obtain its Fisher information metric, and then EI can be calculated using Equation (23). The reason behind this is that the whole manifold p ( y | x ) for any x can be understood as a concatenation of local Gaussian distributions.
In [26], the authors considered a more general case in which intervention noise is added to the input (intervention) variable $x$ in Equation (19). The intervention noise is denoted as $\xi \sim N(0, \delta^2)$, where $\delta$ is its standard deviation. In contrast, $\varepsilon$, which is added to the output (observational) variable $y$ according to Equation (19), is called observational noise to distinguish it from $\xi$. Finally, when both kinds of noise are considered and $L = 1$ and $\epsilon \ll 1$, Equation (19) becomes:
$$EI \approx -\frac{1}{2} \int_{-1/2}^{1/2} \ln \left[\left(\frac{\epsilon}{f'(x)}\right)^2 + \delta^2\right] dx.$$
This is the formula for the EI of the continuous mapping function (Equation (18)) given by [26]. Using this equation with a logistic function for $f(x)$, the authors compared the EIs of a more continuous map and a more discrete map by adjusting a parameter of the logistic function. They discovered that when the noise level is low, the continuous map can exhibit higher EI than the discrete map. However, as the noise level increases, discretizing the mapping function can lead to a model with higher EI. This phenomenon helps explain why digital circuits eventually outperformed analog circuits in mitigating noise interference: the binarization and coarse-graining strategy of digital circuits suppresses the propagation and diffusion of noise.
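As a quick numerical illustration of this formula (a toy sketch of our own, not the authors’ code), the integral can be approximated by averaging the integrand over a uniform grid on $[-1/2, 1/2]$:

```python
import numpy as np

def ei_1d(f_prime, eps=0.01, delta=0.0, num=10001):
    """EI ~ -(1/2) * integral of ln[(eps/f'(x))^2 + delta^2] over [-1/2, 1/2]."""
    xs = np.linspace(-0.5, 0.5, num)
    integrand = np.log((eps / f_prime(xs)) ** 2 + delta ** 2)
    return -0.5 * integrand.mean()      # the interval has unit length

# With f'(x) = 2 and no intervention noise, EI is about ln(200) ~ 5.3 nats;
# raising delta (noisier interventions) lowers EI:
# ei_1d(lambda x: np.full_like(x, 2.0), eps=0.01, delta=0.1)
```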
To generalize the information geometry to the case with both intervention noise and observational noise, a new intermediate variable $\theta \in \Theta \subseteq \mathbb{R}^l$ with dimensionality $l$ is introduced, such that we cannot control $\mathbf{y}$ by directly intervening on $\mathbf{x}$. Instead, we intervene on $\mathbf{x}$ to influence $\theta$ and thereby indirectly affect $\mathbf{y}$. The three variables thus form a Markov chain: $\mathbf{x} \to \theta \to \mathbf{y}$.
In this case, two manifolds can be obtained: the effect manifold $M_E = \{p(\mathbf{y}|\theta)\}_{\theta \in \Theta}$ with metric $g_{\mu\nu} = -\mathbb{E}_{p(\mathbf{y}|\theta)}[\partial_\mu \partial_\nu \ln p(\mathbf{y}|\theta)]$, and the intervention manifold $M_I = \{\tilde{q}(\mathbf{x}|\theta)\}_{\theta \in \Theta}$ with metric $h_{\mu\nu} = -\mathbb{E}_{\tilde{q}(\mathbf{x}|\theta)}[\partial_\mu \partial_\nu \ln \tilde{q}(\mathbf{x}|\theta)]$, where $\tilde{q} \equiv q(\theta|\mathbf{x}) / \int q(\theta|\mathbf{x})\, d\mathbf{x}$ and $\partial_\mu = \partial/\partial\theta^\mu$. Together, these two manifolds are called causal geometry.
Finally, the EI calculation formula for causal geometry is:
$$EI_g = \ln\frac{V_I}{(2\pi e)^{n/2}} - \frac{1}{2 V_I} \int_\Theta \sqrt{|\det(h_{\mu\nu})|}\; \ln\det\left(I_n + h_{\mu\nu}\, g^{\mu\nu}\right) d^l\theta,$$
where we have set $L = 1$ and $m = l = n$ to reduce the number of free parameters, $I_n$ is the identity matrix of size $n$, $g^{\mu\nu}$ denotes the matrix inverse of $g_{\mu\nu}$, and $V_I = \int_\Theta \sqrt{|\det(h_{\mu\nu})|}\, d^l\theta$.

EI Calculation for Neural Networks

One of the application areas for continuous EI is neural networks. In [101], the authors applied EI to analyze the causal effects for different layers of neural networks and compared the results with the information bottleneck theory. The method they used to calculate EI was to convert continuous variables into discrete ones by dividing the domains of the input and output variables into small regions. However, this method has high computational complexity and is challenging to extend to high dimensions.
In [31], the authors proposed a new numeric method for calculating EI on neural networks. The basic idea behind this method is to understand a well-trained feed-forward neural network as a stochastic mapping, following:
$$\mathbf{y} = f(\mathbf{x}) + \varepsilon,$$
where $\mathbf{x} \in \mathbb{R}^n$ is the input variable, $\mathbf{y} \in \mathbb{R}^m$ is the output variable, and $f: \mathbb{R}^n \to \mathbb{R}^m$ is the deterministic map of the neural network. Additionally, $\varepsilon$ is a Gaussian noise with zero mean and covariance matrix $\Sigma = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \cdots, \sigma_m^2)$, where $\sigma_i$ is the mean square error on dimension $i$ across the entire training dataset. In this way, the neural network can be regarded as a conditional Gaussian distribution $p(\mathbf{y}|\mathbf{x}) = N(f(\mathbf{x}), \Sigma)$.
Therefore, Equation (20) can be used on this neural network to calculate EI. In [31], the authors used the Monte Carlo integration method to estimate the expectation in Equation (20). This technique significantly reduces the computational complexity.
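A rough sketch of such a Monte Carlo estimate is shown below (our paraphrase, not the released code of [31]; function names are ours, and a square Jacobian, $m = n$, is assumed so that the determinant is defined):

```python
import torch

def ln_det_term(f, x, sigma):
    """ln|det(J_f(x) Sigma^{-1/2})| = ln|det J| - sum_i ln(sigma_i)."""
    jac = torch.autograd.functional.jacobian(f, x)   # (m, n), assumed square
    _, logabsdet = torch.linalg.slogdet(jac)
    return logabsdet - torch.log(sigma).sum()

def mc_expectation(f, n_in, sigma, L=1.0, samples=1000):
    """Monte Carlo estimate of E_x ln|det(J Sigma^{-1/2})|, x ~ U([-L/2, L/2]^n)."""
    vals = [ln_det_term(f, (torch.rand(n_in) - 0.5) * L, sigma)
            for _ in range(samples)]
    return torch.stack(vals).mean()
```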
However, when the authors compared results calculated with Equation (20) for neural networks of different dimensions, they obtained unreasonable outcomes: EI increased with the number of dimensions. Consequently, causal emergence was not always observed, because the micro-dynamics exhibited an unreasonably large EI. Another drawback of Equation (20) is that the parameter $L$ dominates the value of the expression and should be removed. One way to address this is to use Eff instead of EI (see Equation (10)); however, not all occurrences of $L$ in Equation (20) can be eliminated this way. Therefore, the authors introduced the notion of dimension-averaged EI, defined as
$$dEI \equiv \frac{EI}{m},$$
which simply divides EI by the dimension $m$ of the output variable $\mathbf{y}$. Using dimension-averaged EI, $L$ can be removed when calculating the dimension-averaged causal emergence for the neural networks $f_M$ and $f_m$, representing the macro- and micro-dynamics, respectively:
$$dCE(f_M, f_m) \equiv \frac{EI(f_M)}{n_M} - \frac{EI(f_m)}{n_m},$$
where $dCE$ is the dimension-averaged causal emergence measure, and $n_M$ and $n_m$ are the dimensions of the macro- and micro-dynamics. Incorporating Equation (20) into Equation (28) yields:
$$dCE \approx \frac{\mathbb{E}_{\mathbf{x}_M} \log\left|\det\left(\partial_{\mathbf{x}_M} f_M\right)\right| - \frac{1}{2}\sum_{i=1}^{n_M} \log \sigma_{i,M}^2}{n_M} - \frac{\mathbb{E}_{\mathbf{x}_m} \log\left|\det\left(\partial_{\mathbf{x}_m} f_m\right)\right| - \frac{1}{2}\sum_{i=1}^{n_m} \log \sigma_{i,m}^2}{n_m},$$
in which L is eliminated.

3.5.6. Causal Emergence in Complex Networks

Many complex systems can be represented by networks; therefore, it is natural to apply the causal emergence framework to networks. However, two problems must be solved in advance to apply Hoel’s framework: first, dynamics must be assigned to the studied network, because networks are static structures without any dynamical properties; second, the network must be coarse-grained.
Klein and Hoel [27] addressed the first problem by considering random walks on complex networks. The TPM of the system was then defined from the transition probabilities of a large number of random walkers between nodes. Owing to the good mathematical properties of random walk dynamics on graphs, the authors established an explicit expression for EI on networks, whose final form relies only on the weighted normalized adjacency matrix $w_{ij}$ (with $\sum_j w_{ij} = 1$) of the network. The expression is as follows:
$$EI = H\left(\langle W_i^{out} \rangle\right) - \langle H(W_i^{out}) \rangle,$$
where $W_i^{out} = \{w_{ij} \,|\, j = 1, 2, \dots, N\}$ is the out-weight distribution of node $i$, whose entropy $H(W_i^{out})$ quantifies the uncertainty of the walker’s next step from node $i$; $\langle H(W_i^{out}) \rangle$ is this entropy averaged over all nodes, which characterizes the determinism (or lack thereof) of the random walk dynamics; and $H(\langle W_i^{out} \rangle)$, the Shannon entropy of the out-weight distribution averaged across all nodes, characterizes the degeneracy of the dynamics.
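A small sketch of this computation (ours, directly following the formula above) for a row-normalized adjacency matrix:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def network_ei(W: np.ndarray) -> float:
    """EI = H(<W_i_out>) - <H(W_i_out)> for a row-stochastic matrix W."""
    avg_out = W.mean(axis=0)                     # averaged out-weight vector
    mean_entropy = np.mean([entropy(row) for row in W])
    return entropy(avg_out) - mean_entropy

# A deterministic ring (each node walks to the next) attains EI = log2(N):
# network_ei(np.roll(np.eye(4), 1, axis=1)) -> 2.0
```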
To address the second problem, in [27], the greedy algorithm was used to group nodes to form a macro-network. It is worth noting that when merging micro-nodes into macro-nodes, the TPM of the resulting macro-network can be derived by merging the probabilities from the TPM of the micro-level. To ensure that the grouped macro-network maintains the same random walk dynamics as the original one, a dynamic consistency test is implemented.
In the experimental section, the aforementioned paper explored greedy search for identifying causal emergence in complex networks, with experiments on artificial networks as well as four types of real networks. For ER random networks, effective information depends solely on the connection probability $p$ and converges to $-\log_2 p$ as the network size increases. A significant finding was a phase transition point where the average degree of the network reaches approximately $\log_2 N$; beyond this point, the random network structure contains no additional information as size increases. For preferential attachment (PA) networks, when $\alpha < 1.0$ ($\alpha$ represents the degree of preferential attachment), the effective information of the network increases as the network grows, whereas for $\alpha > 1.0$ the opposite holds; the scale-free case $\alpha = 1.0$ represents the critical boundary of growth. Regarding real networks, the authors found that biological networks exhibited the lowest EI due to significant noise, which can be removed through effective coarse-graining; accordingly, their causal emergence was the most pronounced among the studied network types. Technological networks, by contrast, are sparse and non-degenerate, yielding higher average efficiency and more specific node relationships, and consequently exhibited the highest EIs among the studied networks.
The network coarse-graining method mentioned above is the greedy algorithm; however, when the network is very large, its efficiency remains considerably low. Subsequently, Griebenow et al. [102] introduced a spectral clustering-based approach for identifying causal emergence in networks. More specifically, the method performs an eigenvalue decomposition of the TPM and constructs a similarity matrix from the eigenvectors of the nodes. The OPTICS algorithm is then employed to cluster the nodes, and nodes belonging to the same cluster are aggregated into a macro-node. The maximum value of EI is selected through a linear search over the distance hyperparameter $\epsilon$. In the same paper, the authors additionally proposed a gradient descent algorithm based on deep learning. This approach begins with the random initialization of a grouping matrix, which is used to construct a macro-network by combining micro-nodes; the grouping strategy is then learned automatically by maximizing the effective information of the macro-network. However, this method often falls into local optima.

3.5.7. Other Applications

Once the method for quantifying causal emergence in complex systems is developed, it can be applied across various fields that possess abundant network data. The first type of network studied was biological networks.
As previously discussed, biological networks are full of noise, which poses challenges in comprehending their internal operating principles. On one hand, such noise arises from inherent fluctuations within the system itself, whereas on the other hand, it can be introduced through measurement or observation processes. Consequently, Klein et al. [103] further explored the relationship between noise, degeneracy, and certainty in biological networks and their specific meanings. For instance, in gene expression networks, highly deterministic relationships indicate that the expression of one gene almost invariably leads to the expression of another gene. Simultaneously, degeneracy is a prevalent phenomenon in the evolutionary processes of biological systems. Due to these two factors, it remains unclear at which scale biological systems should be analyzed to gain a deeper understanding of their functions.
To address this, Klein et al. [104] conducted an analysis of protein interaction networks across more than 1800 species. They employed EI as a measure for assessing the levels of noise and uncertainty in protein interactions. The findings revealed that the macro-scale networks exhibited lower levels of noise and degeneracy. Additionally, nodes within the macro-scale interaction groups demonstrated greater resilience than nodes outside them. Through robustness analysis, the authors demonstrated that eukaryotes exhibit a stronger degree of causal emergence than archaea. Additionally, to address the ‘deterministic paradox’, the authors introduced the concepts of the neutral process and the selection process in biological evolution. The neutral process operates at the micro-scale, leveraging mutations to promote interactions and enhance species diversity. The selection process, by contrast, operates at the macro-scale, effectively eliminating noise that hampers system operation and efficiency. Therefore, in order to adapt to the demands of evolution, it becomes essential for evolved biological systems to function across multiple scales.
Hoel et al. [43] conducted further research on causal emergence within biological systems. The authors elaborated that macro- and micro-systems exist widely in biology: for example, the micro-scale of a group of cells can involve potential ion channel changes, whereas the macro-scale corresponds to the membrane potential changes of cells. Furthermore, the authors utilized EI to analyze gene regulatory networks, aiming to identify the most informative models for controlling mammalian heart development. By quantifying the causal emergence within the largest component of the gene network of Saccharomyces cerevisiae, they revealed that informative macro-scale structures are prevalent across biological systems. Additionally, the authors emphasized the importance of evolutionary systems operating at multiple scales due to the significant advantages this offers. Natural selection requires variation between populations, and the degeneracy observed in biological systems serves as a crucial factor for evolution, providing the conditions for the evolutionary process. However, organisms also need to maintain predictable consistency in phenotype, behavior, and structure to ensure survival and reproduction. Consequently, evolved systems need to operate at multiple scales, and the function of multi-scale systems is malleable in changing environments.
Swain et al. [105] investigated the impact of ant interaction history on task assignment and task switching. They employed effective information to examine how noisy information propagates among ants and explored the relationship between EI and the proportion of ants assigned to different tasks within a colony. The study revealed that the extent of the ants’ interaction history influenced task assignment. Additionally, the functional group of the ants involved in an interaction determined the level of noise within that interaction; for example, interactions between foragers displayed significantly higher noise levels than interactions between nurses or cleaners. Furthermore, even when ants switched functional groups, colony cohesion ensured the stability of the overall colony, and different functional groups played different roles in maintaining that cohesion.
The EI indicator and the causal emergence theoretic framework can also be applied to artificial systems. For example, Marrow et al. [101] quantified and monitored the changes in the causal structure of neural networks during training, employing EI to evaluate the degree of causal influence of nodes and edges on downstream tasks at each layer. By observing changes in EI, including determinism (sensitivity) and degeneracy, throughout training, the model’s generalization ability can be assessed, helping to better understand and explain the working principles of neural networks.
Varley et al. [100] applied the causal emergence framework to both discrete cellular automata and the continuous Rössler system. For cellular automata, the authors selected 88 unique rules spanning four classes: static, periodic, chaotic, and complex. By treating each state as a node and linking it to the state it determines, a directed state-transition graph was constructed. The analysis revealed that rules 1, 2, and 4 correspond to the strongest causal emergence of dynamics; notably, the networks constructed from these three rules exhibited a significant presence of star-and-spoke motifs. The authors also drew quantitative conclusions: for example, among the 17 rules belonging to the third and fourth classes, 30% exhibited causal emergence and 70% showed causal degradation, and the CE of cellular automata with the same rule remained relatively consistent across different sizes.
Furthermore, the authors employed the OPN algorithm to transform the continuous system into a discrete state-transition graph for comparative analysis (see Section 3.5.5). They found that chaotic dynamics correlated with low determinism, and that variations in the degeneracy and efficiency coefficients tracked the changes observed in the determinism curve.

3.5.8. Critiques

In the extensive literature on causality and emergence, Hoel’s theory has attracted attention for linking emergence and causality through interventionism, introducing the concept of causal emergence in a quantitative manner. However, Dewhurst [77] provided a philosophical clarification of Hoel’s theory, arguing that it was epistemological rather than metaphysical. This suggests that Hoel’s macroscopic causality is merely a causal explanation based on information theory, rather than involving “genuinely novel causal powers”. This also raises concerns about the assumption of a uniform distribution, as there is no empirical basis to favor it over other distributions.
The computation of Hoel’s effective information relies on two premises: (1) knowledge of the system’s microscopic dynamics, and (2) knowledge of the coarse-graining scheme. However, in practice, it is rare for both conditions to be simultaneously satisfied, especially in observational studies where both are unknown. This limitation hinders the practical applicability of Hoel’s theory.
It has been pointed out that Hoel’s theory neglects constraints on coarse-graining methods, and certain coarse-graining methods can lead to ambiguity [78]. Additionally, combinations of coarse-graining operations over states and over time do not always commute. For instance, let $A_{m \times n}$ be a coarse-graining operation over states (merging $n$ states into $m$ states) and let $(\cdot) \times (\cdot)$ be a coarse-graining operation over time (combining two time steps into one); then the equation $A_{m \times n}(TPM_{n \times n}) \times A_{m \times n}(TPM_{n \times n}) = A_{m \times n}(TPM_{n \times n} \times TPM_{n \times n})$ does not always hold. This indicates that certain coarse-graining operations can cause a discrepancy between the evolution of the macroscopic states and the coarse-graining of the evolved microscopic states, implying the need for consistency constraints on coarse-graining strategies.
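This non-commutativity is easy to demonstrate numerically. The toy sketch below (our construction, using a simple uniform-aggregation coarse-graining) exhibits a TPM and grouping for which the two orders of operation disagree:

```python
import numpy as np

def coarse_grain(tpm, groups):
    """Merge micro-states into macro-states and re-normalize the rows."""
    P = np.zeros((tpm.shape[0], max(groups) + 1))
    for i, g in enumerate(groups):
        P[i, g] = 1.0
    macro = P.T @ tpm @ P
    return macro / macro.sum(axis=1, keepdims=True)

T = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
groups = [0, 0, 1]                                        # merge states 0 and 1
lhs = coarse_grain(T, groups) @ coarse_grain(T, groups)   # coarsen, then evolve
rhs = coarse_grain(T @ T, groups)                         # evolve, then coarsen
print(np.allclose(lhs, rhs))                              # False: they disagree
```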
This means that solely maximizing EI may raise some problems, and further constraints must be added to the framework. We discuss this problem in Section 4.1.1.

3.6. Fernando E. Rosas’s Quantification of Causal Emergence

3.6.1. Basic Idea

In Hoel’s framework, it is essential to find a coarse-graining strategy in order to determine the occurrence of causal emergence, and the outcome is influenced by the choice of the coarse-graining method. Although it has been suggested by Hoel [19,25] that an optimal strategy can be identified by maximizing EI, certain issues have been raised by Dewhurst [77]. EI is a global measure because it requires the input variable X to follow a uniform distribution over the whole domain [24]; however, many regions of that domain may never be observed in data. Consequently, there is a pressing need for an alternative theoretical framework for causal emergence that does not rely on a coarse-graining method.
In response, Fernando E. Rosas took an approach that did not require a coarse-graining strategy as a prerequisite, attempting to break down excess entropy—the mutual information between a system’s past and future states—into non-overlapping parts to identify the information components most relevant to causal emergence. To accomplish this, he relied on the partial information decomposition (PID) framework proposed by Williams and Beer, which provides a method for the non-overlapping decomposition of joint mutual information [29]. Below, we introduce Williams and Beer’s theoretical framework.

3.6.2. Partial Information Decomposition

The partial information decomposition (PID) framework investigates the general informational relationship between source variables and a target variable. To simplify the problem description without sacrificing generality, let us consider a system with two input variables ($X_1$, $X_2$) and one output variable ($Y$) as an example, as depicted in the Venn diagram below.
The mutual information between the target variable and the individual source variables, $I(X_1; Y)$ and $I(X_2; Y)$, as well as the mutual information between the target variable and the joint source variable, $I(X_1, X_2; Y)$, exhibit a complex relationship; intuitively, one cannot be converted into another. Nonetheless, the overlapping circles of $I(X_1; Y)$ and $I(X_2; Y)$ divide the oval region of $I(X_1, X_2; Y)$ into four adjacent non-overlapping regions, representing the three types of information components of $I(X_1, X_2; Y)$:
$$I(X_1, X_2; Y) = Red(X_1, X_2; Y) + Un(X_1; Y | X_2) + Un(X_2; Y | X_1) + Syn(X_1, X_2; Y)$$
Specifically:
  • $Red$: Redundant information refers to the information held by both sources;
  • $Un$: Unique information refers to the information held by one source but not the other;
  • $Syn$: Synergistic information is the information held by all sources together, but not by any individual one.
If we can identify variable representations corresponding to these information components, the non-overlapping feature in the Venn diagram indicates their independence from one another.
For a more intuitive understanding, let us consider a few simple toy examples.
Case 1, $Y = X_1 = X_2$: This is a scenario in which a single source variable can predict the target variable, and adding the other source variable does not enhance the prediction. In this case, we have $I(X_1, X_2; Y) = Red(X_1, X_2; Y)$, as depicted in Figure 9b.
Case 2, $Y = X_1 \oplus X_2$, with $X_1 \perp X_2$: In this scenario, neither source variable can predict the target variable individually, but together they predict the target synergistically. In this case, $I(X_1, X_2; Y) = Syn(X_1, X_2; Y)$, as depicted in Figure 9c.
Case 3, $Y = X_1$, with $X_1 \perp X_2$: In this scenario, the target variable can be predicted by one of the source variables but not the other, which implies $I(X_1, X_2; Y) = Un(X_1; Y | X_2)$, as depicted in Figure 9d.
For general cases, Williams and Beer presented a method for calculating redundant information, defined as R e d m i n . This method reflects the concept of redundancy, identifying it as the shared information across all sources, which is equivalent to the minimum amount of information contributed by any single source:
$$Red_{min}(X_1, X_2; Y) \equiv \sum_{y \in Y} P(Y = y) \min_{X_i} I(Y = y; X_i)$$
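The compact sketch below (ours, not reference code) computes $Red_{min}$ for two discrete sources from a joint distribution array $p[x_1, x_2, y]$, using the specific information $I(Y = y; X) = \sum_x P(x|y) \log_2 [P(y|x)/P(y)]$:

```python
import numpy as np

def specific_information(p_xy, y):
    """I(Y=y; X) for a joint marginal p_xy[x, y]."""
    p_y = p_xy.sum(axis=0)[y]
    info = 0.0
    for x in range(p_xy.shape[0]):
        p_x = p_xy[x].sum()
        if p_xy[x, y] > 0:
            info += (p_xy[x, y] / p_y) * np.log2((p_xy[x, y] / p_x) / p_y)
    return info

def red_min(p: np.ndarray) -> float:
    """Red_min(X1, X2; Y) for a joint array p[x1, x2, y]."""
    p_x1y, p_x2y = p.sum(axis=1), p.sum(axis=0)
    p_y = p.sum(axis=(0, 1))
    return sum(p_y[y] * min(specific_information(p_x1y, y),
                            specific_information(p_x2y, y))
               for y in range(p.shape[2]) if p_y[y] > 0)

# Case 1 above (Y = X1 = X2, uniform binary): all information is redundant.
p = np.zeros((2, 2, 2)); p[0, 0, 0] = p[1, 1, 1] = 0.5
# red_min(p) -> 1.0 bit
```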
Figure 9e presents an alternative way of visualizing the PID framework, known as the redundancy lattice [29]. In this representation, {12} denotes synergistic information, {1} and {2} denote unique information, and {1}{2} denotes redundant information. In the subsequent sections, we will see that the ϕID framework employs the notation of the redundancy lattice.
Although the PID framework remains compatible with scenarios involving more than two source variables, it should be noted that the corresponding Venn diagrams and redundancy lattices for these scenarios can become substantially more complicated and difficult to decipher, as discussed in [29].

3.6.3. Integrated Information Decomposition

The PID framework provides a useful framework for analyzing the non-overlapping information composition in a multivariate system. However, its application to causal analysis of dynamic systems is limited by the fact that it only allows for a single target variable. This limitation prevents the framework from fully capturing the transitions of multiple states across time steps. To address this challenge, Fernando E. Rosas developed Integrated Information Decomposition ( ϕ ID) [30], which takes its name from Integrated Information Theory (IIT) [21]. This extension of PID provides a more comprehensive method for analyzing dynamic systems.
To introduce Rosas’s framework clearly, we consider a system with only two variables. All the definitions and calculations can be generalized to systems with more variables.
The objective of the ϕID framework is to decompose excess entropy into non-overlapping information components. In a two-variable Markovian system, excess entropy is given by $E = I(X_t^1, X_t^2; X_{t+1}^1, X_{t+1}^2)$, where $X_t^1$ and $X_t^2$ represent the current states, and $X_{t+1}^1$ and $X_{t+1}^2$ represent the future states. From a causation perspective, $X_t^1$ and $X_t^2$ represent causes, whereas $X_{t+1}^1$ and $X_{t+1}^2$ represent effects.
Rosas first noted that, mathematically, the PID framework is symmetric with respect to the source and target variables. There are thus two viewpoints for analyzing the above Markovian system with PID. One viewpoint takes $X_{t+1}^1$ and $X_{t+1}^2$ together as the target variable (aiming to decompose the “cause” side into elements), whereas the other takes $X_t^1$ and $X_t^2$ together as the target variable (aiming to decompose the “effect” side into elements). The redundancy lattices of these two perspectives are illustrated in Figure 10a,b and are referred to as “forward PID” and “backward PID”, respectively.
Next, Rosas introduced the ϕ ID framework, consolidating forward PID and backward PID into a single framework. In this framework, the one-to-many relationship in PID was expanded to include many-to-many relationships. He built full connections between the elements of the redundancy lattices of forward PID and backward PID. This approach generates 16 relations between the source and target, as depicted by the colored lines in Figure 11a (each color corresponds to a specific source element of X t ). These relations are referred to as “ ϕ ID atoms” and can be represented as vertices in a lattice, as shown in Figure 11b. This more complex lattice is referred to as the double-redundancy lattice because it is a “product” of two redundancy lattices (of forward PID and backward PID).
In the double-redundancy lattice, each vertex is defined as a “ϕID atom”, denoted $I_{\alpha \to \beta}$, with $\alpha, \beta \in \mathcal{A}$, where $\mathcal{A} = \{\{1\}\{2\}, \{1\}, \{2\}, \{12\}\}$. For instance, the ϕID atom of the vertex $\{12\} \to \{12\}$ is denoted $I_{\{12\} \to \{12\}}$. A ϕID atom represents the decomposed information transmitted from the cause elements to the effect elements, and the atoms cumulatively sum to the excess entropy of the entire system. Consequently, the formal definition of Integrated Information Decomposition (ϕID) is:
$$E = I(X_t; X_{t+1}) = \sum_{\alpha, \beta \in \mathcal{A}} I_{\alpha \to \beta}$$
The main advantage of ϕID is that it provides a more fine-grained analysis of complex systems than PID by introducing temporal dynamics. By decomposing the mutual information of the system across time, ϕID offers another approach to the quantitative study of causal emergence.

3.6.4. Reconciling Emergences

Building upon the concept of synergistic information in the PID framework, Rosas introduced a quantitative definition of causal emergence using the ϕ ID framework to tackle the challenge of identifying an appropriate coarse-graining strategy. The definition includes two aspects: firstly, determining whether the system has the capacity to generate causal emergence, and secondly, assessing the occurrence of causal emergence given a specific macroscopic feature.
Regarding a system’s capacity to exhibit causal emergence, this definition establishes a connection between causal emergence and synergistic relationships among variables across different time points. Consequently, a system denoted as X t is said to possess the capacity for causally emergent features if and only if:
$$Syn(X_t; X_{t+1}) > 0$$
In this context, causal emergence is understood as the synergistic effect between variables at preceding and subsequent moments within a Markovian dynamics system.
Then, Rosas further divided causal emergence into two parts within the ϕID framework, downward causality and causal decoupling, based on the distinct characteristics of the information atoms. Among the sixteen ϕID atoms obtained by decomposing the mutual information $I(X_t; X_{t+1})$ with ϕID, four correspond to the synergistic effect and are regarded as the composition of causal emergence. These atoms are denoted as $I_{\{12\} \to \alpha}(X_t, X_{t+1})$, with $\alpha \in \mathcal{A} = \{\{1\}\{2\}, \{1\}, \{2\}, \{12\}\}$.
Downward causality is denoted as $D(X_t, X_{t+1})$, encapsulating the information atoms that manifest a synergistic effect exclusively in the forward PID, as shown in Figure 12 and Figure 13:
$$D(X_t, X_{t+1}) := \sum_{\alpha \in \mathcal{A} \setminus \{12\}} I_{\{12\} \to \alpha}(X_t, X_{t+1})$$
Causal decoupling, in turn, is denoted as $G(X_t, X_{t+1})$ and pinpoints the specific information atom in which both the forward and backward PID exhibit synergy:
$$G(X_t, X_{t+1}) := I_{\{12\} \to \{12\}}(X_t, X_{t+1})$$
Since ϕ ID is a non-overlapping decomposition of all information, this classification takes into account all cases of causal emergence, that is,
$$Syn(X_t; X_{t+1}) = G(X_t, X_{t+1}) + D(X_t, X_{t+1}).$$
In addition, Rosas provided an approach to quantify the causal emergence of a specific macro-variable, i.e., of a given coarse-graining strategy. If a system has the capacity to generate causal emergence, some macroscopic features may exhibit it. A feature variable $V$ is said to be supervenient on the underlying system if it provides no additional predictive power for the future state at time $t+1$ once the complete state $X$ of the system at time $t$ is known with perfect precision; this is equivalent to $V_t$ being statistically independent of $X_{t+1}$ given $X_t$. Then, for a system described by $X_t$, a supervenient feature $V_t$ is said to exhibit causal emergence if:
$$Un(V_t; X_{t+1} \,|\, X_t) > 0.$$
For this definition, the system’s capacity for causal emergence is required, i.e., $Syn(X_t; X_{t+1}) > 0$, since $Un(V_t; X_{t+1}|X_t) \leq Syn(X_t; X_{t+1})$ holds for any supervenient feature $V_t$. Corresponding to the classification of the system’s capacity, the downward causation of a feature variable $V$ (as indicated by the red dotted line in Figure 13) exists when
$$Un(V_t; X_{t+1}^1 \,|\, X_t) > 0 \quad \text{or} \quad Un(V_t; X_{t+1}^2 \,|\, X_t) > 0,$$
and causal decoupling (as indicated by the blue dotted line in Figure 13) exists when
$$Un(V_t; V_{t+1} \,|\, X_t, X_{t+1}) > 0,$$
which also depends on the capacity of the system. Furthermore, $V_t$ is said to exhibit pure causal decoupling if $Un(V_t; X_{t+1}^\alpha \,|\, X_t) = 0$ for every individual component $\alpha$ while $Un(V_t; X_{t+1} \,|\, X_t) > 0$. If all emergent features exhibit pure causal decoupling, the system is said to be perfectly decoupled.
Although a rigorous quantitative definition of causal emergence was proposed, the mathematical formulations used in ϕID can be complex and computationally demanding, making the method difficult to apply to real-world systems. In addition, the inconsistency among PID calculation methods makes the definition of causal emergence depend on a specific choice of PID. To address these problems, Rosas relaxed the calculation of causal emergence and established identification criteria based on sufficient conditions for causal decoupling and downward causation.
Specifically, to avoid committing to a particular quantification of synergistic and redundant information, these criteria repeatedly subtract redundant information, making the result a sufficient condition for causal emergence; this loses some generality but improves reliability. The three indicators are:
$$\Psi_{t, t+1}(V) := I(V_t; V_{t+1}) - \sum_j I(X_t^j; V_{t+1}),$$
which measures the mutual information between the macro-variable at two successive time steps minus the summed mutual information between each micro-variable and the future macro-variable;
$$\Delta_{t, t+1}(V) := \max_j \left( I(V_t; X_{t+1}^j) - \sum_i I(X_t^i; X_{t+1}^j) \right),$$
which is the maximum, over components $j$, of the difference between the mutual information of $V_t$ and $X_{t+1}^j$ and the summed mutual information of the $X_t^i$ and $X_{t+1}^j$; and
$$\Gamma_{t, t+1}(V) := \max_j I(V_t; X_{t+1}^j),$$
which is the maximum mutual information between $V_t$ and $X_{t+1}^j$. For the above indicators, $V$ is a predefined macro-variable; the specific method for finding such a variable is not discussed in [24].
The indicators are used as follows: firstly, $\Psi_{t,t+1}(V) > 0$ is a sufficient condition for the causal emergence of $V_t$; secondly, $\Delta_{t,t+1}(V) > 0$ is a sufficient condition for $V_t$ to exhibit downward causation; thirdly, $\Psi_{t,t+1}(V) > 0$ together with $\Gamma_{t,t+1}(V) = 0$ constitutes a sufficient condition for causal decoupling.
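As a practical illustration (our sketch, not the authors’ code), $\Psi$ can be estimated from discrete micro- and macro-time series with a plug-in mutual information estimator:

```python
import numpy as np
from collections import Counter

def mi(a, b) -> float:
    """Plug-in mutual information (bits) between two discrete sequences."""
    n = len(a)
    joint, pa, pb = Counter(zip(a, b)), Counter(a), Counter(b)
    return sum((c / n) * np.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in joint.items())

def psi(V, X) -> float:
    """Psi = I(V_t; V_{t+1}) - sum_j I(X^j_t; V_{t+1}).
    V: macro series of length T; X: (T, n) array of micro series."""
    v_now, v_next = list(V[:-1]), list(V[1:])
    value = mi(v_now, v_next)
    for j in range(X.shape[1]):
        value -= mi(list(X[:-1, j]), v_next)
    return value   # Psi > 0 is sufficient (not necessary) for emergence of V
```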
In summary, Rosas proposed an approach to quantitatively characterize and classify causal emergence based on ϕID, establishing the relationship between causal emergence and the synergistic effect of variables at different time points and further categorizing causal emergence. The definition not only provides an objective assessment of a system’s capacity for causal emergence but also enables measuring the causal emergence associated with a specific macro-feature. His significant contributions include bridging the gap between the study of causal emergence and quantitative empirical research, classifying the different types of causal emergence, and complementing philosophical discussions on the topic.
The PID and ϕ ID frameworks hold the potential for explaining data in various applications. For instance, Luppi et al. recently employed the ϕ ID method to analyze brain BOLD signals [106], aiming to identify the synergistic core of the brain. Their findings revealed that synergistic information facilitates the integration of different brain regions, whereas redundancy contributes to robustness. This study offers valuable insights into understanding the underlying mechanisms of the brain.
The advancement of partial information decomposition techniques allows for further analysis of the mutual information between variables, enabling a deeper understanding of system properties from multiple perspectives. In a study by Varley et al. [107], the authors applied partial information decomposition to decompose the mutual information of a system. They calculated an indicator of synergy bias to assess how synergistic information is distributed across different levels of the system using the method proposed by Williams and Beer [29]. A higher synergy bias indicated a greater amount of partial information involved in synergistic relationships. Subsequently, the authors observed that in certain systems exhibiting causal emergence, when the system is simplified or reduced, the synergy bias increases. This suggests that as we coarse-grain the system, partial information undergoes a transformation from redundancy to synergy. The overall conclusion drawn was that emergence can be understood as a form of information conversion.

3.6.5. Causal Emergence Identification from Data

In the previous sections, we introduced several works on quantifying emergence through causality and other information-theoretic concepts. All of these works tried to propose quantitative measures, conceptual frameworks, and numeric examples based on Markovian dynamics for causal emergence. However, in practical applications, for a theoretical framework to be implemented, we need to automatically identify causal emergence from real data, especially time-series data of dynamic systems, and provide explanations for the results.
The first method for causal emergence identification was introduced in Rosas’s paper on causal emergence [24] and involves the three indicators defined by Equations (41)–(43), as mentioned in Section 3.6.4. The identification criteria were exemplified in three case studies, leading to the following conclusions: particle collisions emerge as a distinctive feature within Conway’s Game of Life, flock dynamics emerge as a characteristic feature in simulated bird behavior, and the representation of motor behavior in the cortex emerges from neural activity.
Although these three indicators avoid the problem of computing redundant information, it is important to note that they are sufficient conditions rather than definitive proof of emergence: indicator values greater than 0 suggest the presence of emergence, but values less than or equal to 0 do not imply its absence. The indicators also face challenges in identifying emergence in systems with substantial redundant information or a large number of variables, which is the case in many real-world systems. Additionally, a limitation of this method is the requirement of a predetermined coarse-grained variable $V$; different choices of this variable can significantly affect the results. As a result, the development of an automatic, data-driven coarse-graining strategy remains an unresolved issue.
As a result, the current theoretical frameworks for causal emergence lack a practical and effective identification algorithm. Although previous studies have proposed methods based on static network structures and approximations for information decomposition, there is still a need for a comprehensive approach that can be applied to general Markovian dynamic systems. One of the main challenges is the necessity to search for all possible functions of coarse-graining or decomposing subsets within the data to identify causal emergence in complex systems. However, conventional numerical methods are unable to handle the computational costs associated with such an extensive search in a vast functional space. Therefore, the development of new methods is imperative to address these issues.
In Section 4, we explore the application of machine learning techniques to address the challenge of identifying causal emergence within time-series data.

3.6.6. Comparison with Hoel’s Framework

When comparing Hoel’s framework to Rosas’s quantification framework for causal emergence, several clear advantages can be observed in the latter. Firstly, Rosas’s theory does not require a predetermined coarse-graining method, making it more mathematically rigorous and formal. Secondly, it offers a detailed decomposition of causal emergence, specifically downward causation and causal decoupling. Lastly, it effectively avoids cases of fake causal emergence, where the macro-variable depends solely on unique or redundant information from the micro-variables.
However, there are also some disadvantages to consider. Firstly, obtaining the full information lattice requires a systematic iteration over all variable compositions. Additionally, despite the use of Formula (38), a macro-variable must still be defined, and the authors do not provide a method for identifying such a variable. Secondly, all of the mutual information terms and their decomposition are based on correlations rather than causality; it remains to be discussed how causal elements, such as interventions and counterfactuals, can be incorporated into the framework.
Finally, the issue of identifying whether causal emergence occurs in a system based on the given time-series data of its behaviors has not been addressed in the preceding discussion. To address this problem, the application of emerging technologies in machine learning and artificial intelligence is required. These technologies can provide valuable tools and techniques for detecting and analyzing causal emergence.

4. Causal Emergence and Machine Learning

Recently, newly emerging machine learning technologies have made significant breakthroughs in addressing a range of important and challenging problems [108,109,110,111,112,113,114,115,116]. Examples include defeating human champions in complex games like Go [108,109], predicting the intricate structures of protein folding [110], and generating human language using large language models [117]. These achievements have been made possible through the application of machine learning methods, which leverage well-designed neural network architectures and automatic differentiation techniques.
However, there is a crucial limitation of conventional machine learning: it can only capture the information or associations within the data, without being able to uncover the underlying causal relationships. According to the theory of causal hierarchy by Judea Pearl [15,16,118], causality is very different from association because the former always represents a more stable relationship that is invariant in different environments, whereas the latter may be more dependent on contexts and reflects the limitation of the data. Therefore, it is necessary to develop new machine learning frameworks to incorporate the consideration of causality [33,118,119]. Numerous studies have provided evidence to show that incorporating aspects of causality can lead to improved performance in machine learning tasks, including out-of-distribution generalization, adaptation to diverse environments, and handling interventions [118,120,121].
Machine learning and causal inference are also connected to causal emergence in two aspects, which we call “causal emergence with machine learning” and “causal emergence for machine learning”. On one hand, machine learning can be employed to address the challenge of identifying causal emergence from data, referred to as “causal emergence with machine learning”. On the other hand, causal emergence theory and effective information (EI) measures have potential applications in the field of machine learning, known as “causal emergence for machine learning”. However, the extension depends on a deeper understanding of how causal emergence theory is related to causality.

4.1. Causal Emergence with Machine Learning

In Section 3.6.5, we discussed the challenge of identifying causal emergence. Although Rosas et al. presented a method to address this issue by calculating an approximate measure of causal emergence based on their theoretical framework (Equation (41)), the method requires a macro-variable, denoted as $V_t$, to be defined in advance. However, defining this variable in a general way when confronted with data remains challenging. Hence, the question arises of whether we can leverage machine learning to automatically learn the macro-variable. In this section, we explore two recent approaches that utilize machine learning and neural network techniques to address the problem of causal emergence identification from the perspective of Hoel et al.'s theoretical framework of EI maximization. These methods aim to learn the coarse-graining strategy, macro-variable, and macro-dynamics directly from the data, eliminating the need for explicit definitions. By leveraging the power of machine learning, we can potentially overcome the limitations of manual variable definition and enable a more automated, data-driven approach to identifying causal emergence.

4.1.1. Neural Information Squeezer (NIS)

The first work we introduce [31] was recently published in this special issue. In this paper, the authors formulated the problem of causal emergence identification in continuous Markov dynamics. Suppose the time-series data $x_1, x_2, \ldots, x_T$ are generated continuously by a stochastic dynamical system, which can be described by a differential equation,
$$\frac{dx}{dt} = g(x(t), \xi),$$
where $\xi$ is Gaussian noise. The time series $\{x_t\}$ represents the observations of the micro-states of the system. The problem we confront is to identify whether causal emergence occurs in the original system. We also want to know the appropriate coarse-graining strategy, the macro-dynamics corresponding to the chosen coarse-graining method, and the dimension of the phase space at the macro-level when causal emergence occurs.
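To make this setting concrete, the following is a minimal sketch of how such noisy micro-level time-series data could be generated by Euler-Maruyama integration; the drift function, noise level, and the two-dimensional linear example are illustrative assumptions of ours, not the systems studied in [31].

```python
import numpy as np

def simulate_micro(g, x0, T, dt=0.01, sigma=0.1, seed=0):
    """Generate a micro-state time series from dx/dt = g(x) plus Gaussian
    noise via Euler-Maruyama integration (illustrative stand-in)."""
    rng = np.random.default_rng(seed)
    x = np.empty((T, len(x0)))
    x[0] = x0
    for t in range(1, T):
        drift = g(x[t - 1])                                  # deterministic part
        noise = sigma * np.sqrt(dt) * rng.standard_normal(len(x0))
        x[t] = x[t - 1] + drift * dt + noise
    return x

# Hypothetical example: two coordinates that relax toward each other, so
# their mean is a natural one-dimensional macro-state candidate.
A = np.array([[-1.0, 1.0], [1.0, -1.0]])
series = simulate_micro(lambda x: A @ x, x0=np.array([1.0, -1.0]), T=1000)
```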
To address this problem, the authors proposed a mathematical framework that converts the original issue into an optimization problem over functions. Specifically, the goal is to find an appropriate coarse-graining strategy $\phi_q: \mathbb{R}^p \rightarrow \mathbb{R}^q$, where $q < p$ is the dimension of the macro-states, together with the macro-level Markov dynamics $f: \mathbb{R}^q \rightarrow \mathbb{R}^q$ with the random noise $\zeta$, by maximizing $\mathcal{J}$, the dimension-averaged EI of the macro-dynamics (see Equation (27)). This is expressed as
$$\max_{\phi_q, \hat{f}} \mathcal{J}(\hat{f}_{\phi_q}).$$
However, this problem has a trivial solution: for example, $\phi_q$ maps all micro-states to a constant, and the macro-dynamics is the identity. Surprisingly, this solution has a large EI because the identity map has the largest determinism and the lowest degeneracy among all $q$-dimensional functions. Yet it is useless, because the coarse-graining strategy has discarded so much information that the macro-dynamics tells us nothing about the original system.
To address this problem, the authors added a reasonable constraint to the original optimization framework, called the effectiveness requirement. A coarse-graining strategy and its corresponding macro-dynamics are considered effective if there exists a decoarse-graining function $\phi^{\dagger}$ such that the composition of $\phi$, $f$, and $\phi^{\dagger}$ can predict the micro-state (rather than the macro-state) at the next time step from the state at the previous step. The constraint can be written as
$$\| \phi^{\dagger}(f(\phi(x_t))) - x_{t+1} \| < \epsilon, \quad (46)$$
for all time steps t, where ϵ is a given constant. This way, an effective coarse-graining strategy and macro-dynamics not only maximize the EI but also reproduce the original micro-dynamics as much as possible.
This constraint is an important complement to the framework of EI maximization because the decoarse-graining function $\phi^{\dagger}$ can map macro-states back to micro-states, addressing the problem of the ambiguity of macro-states. The constraint also ensures approximate commutativity: applying $(\phi^{\dagger})^{-1} \approx \phi$ to both sides of Equation (46) yields $\| f(\phi(x_t)) - \phi(x_{t+1}) \| < \epsilon'$ for a correspondingly small bound $\epsilon'$.
However, the problem is still hard to solve. In [31], the authors proposed a two-stage method, which can minimize the prediction error under a given dimension q of macro-states in the first step and maximize EI for different values of q in the second step. The first step can be solved by training a neural network, and the second step converts the complex functional optimization problem into a line search in one-dimensional space.
To realize the abstract mathematical framework, the authors proposed an encoder-dynamics learner-decoder neural network architecture, named the Neural Information Squeezer (NIS), whose basic working principle is depicted in Figure 14.
To reduce the complexity of the encoder and decoder, corresponding to $\phi$ and $\phi^{\dagger}$, respectively, the authors used an invertible neural network to implement the encoder and inverted the input-output workflow of the same network to implement the decoder. Concretely, the encoding process is decomposed into two computation steps: information conversion with an invertible neural network $\psi: \mathbb{R}^p \rightarrow \mathbb{R}^p$ on the $p$-dimensional input space, and information discarding with a simple projection operation. This is expressed as:
$$\phi(x) = \chi_q(\psi(x)),$$
where $\chi_q$ is the projection operator that maps a $p$-dimensional vector to a $q$-dimensional vector ($p > q$) by retaining dimensions 1 to $q$. The decoding process also consists of two stages: randomly sampling Gaussian noise to complete the details of the macro-states, and information conversion in the inverted direction, that is,
$$\phi^{\dagger}(x) = \psi^{-1}(x \oplus \zeta),$$
where $\zeta \sim \mathcal{N}(0, I_{p-q})$ is a $(p-q)$-dimensional standard Gaussian random vector, and $\oplus$ denotes vector concatenation.
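The construction above can be sketched in a few lines of PyTorch. The snippet below is a minimal, hypothetical illustration in which a single additive coupling layer stands in for the invertible network $\psi$; the class names, layer sizes, and dimensions are our own choices, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """One additive coupling layer: invertible by construction (a simplified
    stand-in for the invertible network psi used in the NIS)."""
    def __init__(self, p):
        super().__init__()
        self.half = p // 2
        self.net = nn.Sequential(nn.Linear(self.half, 32), nn.ReLU(),
                                 nn.Linear(32, p - self.half))

    def forward(self, x):                     # psi
        a, b = x[:, :self.half], x[:, self.half:]
        return torch.cat([a, b + self.net(a)], dim=1)

    def inverse(self, y):                     # psi^{-1}
        a, b = y[:, :self.half], y[:, self.half:]
        return torch.cat([a, b - self.net(a)], dim=1)

class NISCodec(nn.Module):
    """Encoder phi(x) = chi_q(psi(x)); decoder pads the macro-state with
    (p - q)-dimensional Gaussian noise and inverts psi."""
    def __init__(self, p, q):
        super().__init__()
        self.p, self.q = p, q
        self.psi = Coupling(p)

    def encode(self, x):                      # phi: R^p -> R^q
        return self.psi(x)[:, :self.q]        # projection chi_q keeps dims 1..q

    def decode(self, y):                      # phi_dagger: R^q -> R^p
        zeta = torch.randn(y.shape[0], self.p - self.q)   # noise completion
        return self.psi.inverse(torch.cat([y, zeta], dim=1))

codec = NISCodec(p=4, q=2)
x = torch.randn(8, 4)
x_hat = codec.decode(codec.encode(x))         # micro -> macro -> micro
```

A macro-dynamics learner $f$ (for instance, a small feed-forward network on the $q$-dimensional macro-states) would then be trained jointly so that the reconstruction error in Equation (46) stays below the threshold.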
The adoption of an invertible neural network in the framework serves two main purposes. Firstly, by inverting the workflow of the neural network ψ to obtain the decoder ϕ , a significant reduction in the number of parameters is achieved. This reduction can lead to improved efficiency and computational benefits. Secondly, the use of invertible functions in the neural network provides favorable mathematical properties that facilitate the analysis of the entire framework in a tractable manner. These properties enable researchers to gain insights and understand the characteristics of the framework more easily.
This architectural design with an invertible neural network not only reduces the complexity of computation but also separates the input information from the micro-states into noise and EI for prediction. The authors proved a mathematical theorem stating that, as the neural network converges through training, the mutual information between adjacent macro-states tends to approach the mutual information of two adjacent-step micro-states in the data. This means that all the information discarded by the encoder is almost irrelevant to the prediction. All the useful information converges into the information bottleneck within the macro-level dynamics learner. Another theorem demonstrates that the degree of information compression for micro-states increases as we reduce the dimension of the macro-states. Thus, the macro-state dynamics learner, acting as a narrower information bottleneck, progressively discards more and more extraneous information from the original micro-state data.
To address the problem of causal emergence identification, we first optimize the neural networks and obtain the dimension-averaged EI, $\mathcal{J}_q$, for different values of $q$. When $q = p$, the derived $\mathcal{J}_p$ is the dimension-averaged EI of the micro-dynamics. Finally, the quantity
$$\Delta \mathcal{J} = \mathcal{J}_q - \mathcal{J}_p$$
is the measure of dimension-averaged causal emergence (see Equation (28)). If $\Delta \mathcal{J} > 0$, we can state that causal emergence occurs.
Numerical examples are provided to show the effectiveness of the whole framework. Interestingly, the Neural Information Squeezer can handle time-series data with discrete micro-states, even though the framework was initially designed for continuous dynamical systems. In particular, the model recovers the same coarse-graining strategy and macro-level dynamics as the four-node Boolean network example in Hoel's original paper, even though no information about node grouping or state mapping was provided to the framework.
It is also interesting to compare the NIS with the frameworks and models mentioned in previous sections. Compared to the theory of computational mechanics, the NIS can be treated as a kind of ϵ -machine because any encoded macro-states can be understood as states. When the whole framework is well trained, such that it can precisely predict future micro-states, the encoded macro-states converge to effective states, which can be treated as causal states in computational mechanics. However, the objective that maximizes the effective information (EI) in the NIS has no correspondence.
The NIS also shares a similarity with G-emergence because it adopts the idea of Granger causality: the effective macro-state is optimized by predicting the micro-state at the next time step. However, there are several obvious differences between the two frameworks. In the theory of G-emergence, the macro-state must initially be selected manually, although it is then optimized automatically. In addition, the NIS uses neural networks to predict future states, whereas G-emergence uses auto-regressive techniques to fit the data. By the universal approximation theorem (see [122,123]), neural networks are more expressive.
The NIS provides a solution for identifying causal emergence from data, and it is the first method to automatically construct both the coarse-graining strategy and the macro-dynamics from data for this purpose.
Another important benefit of the NIS is that it addresses a generalized causal emergence identification problem that goes beyond a mere binary judgment. After training, the NIS not only determines whether causal emergence occurs but also yields one neural network that coarse-grains the data and another that simulates the macro-dynamics, providing much richer information than a simple verdict.

4.1.2. Neural Information Squeezer Plus (NIS+)

One of the biggest problems of the NIS is that it does not directly address the EI maximization problem by optimizing the neural networks (encoder, decoder, and dynamics learner). Instead, it searches for the optimal dimension of the macro-state space q such that EI can be maximized. Therefore, for a given q, the neural networks are optimal for reconstructing the micro-dynamics (minimizing Equation (46)) but not for EI. In addition, the NIS cannot support complex computations such as coarse-graining functions involving multiple steps of information conversion and discarding.
To address these problems, the NIS+ was proposed in [32]. Mathematically, the EI maximization problem can be transformed into a machine learning problem using the definition of EI, a variational inequality, and a probability reweighting technique. After this conversion, the resulting minimization problem can be solved by training three neural networks, $\psi_\omega$, $f_\theta$, and $g_{\theta'}$, with parameters $\omega$, $\theta$, and $\theta'$, respectively. Formally, the minimization problem without constraints can be written as:
$$\min_{\omega, \theta, \theta'} \sum_{t=1}^{T-1} w(x_t)\, \| y_t - g_{\theta'}(y_{t+1}) \| + \lambda \| \hat{x}_{t+1} - x_{t+1} \|,$$
where $y_t = \phi(x_t) = \mathrm{Proj}_q(\psi_\omega(x_t))$ and $y_{t+1} = \phi(x_{t+1}) = \mathrm{Proj}_q(\psi_\omega(x_{t+1}))$ are the macro-states, $\lambda$ is a Lagrange multiplier treated as a hyperparameter in the experiments, and $w(x_t)$ is the inverse probability weight, defined as:
$$w(x_t) = \frac{\tilde{p}(y_t)}{p(y_t)} = \frac{\tilde{p}(\phi(x_t))}{p(\phi(x_t))},$$
where $\tilde{p}$ is the new distribution of the macro-states $y_t$ after the intervention $do(y_t \sim U_q)$, and $p$ is the natural distribution of the data. In practice, $p(y_t)$ is estimated through kernel density estimation (KDE) [124], while the intervened distribution $\tilde{p}(y_t)$ is assumed to be uniform and is therefore characterized by a constant density. The weight $w$ is then computed as the ratio of these two distributions. Theorems have been proven to ensure the correctness of this conversion. The overall framework of the NIS+ is depicted in Figure 15.
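As a concrete illustration, the inverse probability weights might be estimated as in the following sketch; the uniform target density over the bounding box of the observed macro-states is our simplifying assumption, not necessarily the support used in [32].

```python
import numpy as np
from scipy.stats import gaussian_kde

def inverse_probability_weights(y, eps=1e-12):
    """Estimate w(x_t) = p_tilde(y_t) / p(y_t) for macro-states y.

    y: array of shape (T,) or (T, q) holding the encoded macro-states.
    p is estimated with a Gaussian KDE; p_tilde, the post-intervention
    distribution do(y ~ uniform), is taken to be a constant density over
    the bounding box of the observed samples (illustrative choice).
    """
    y = np.asarray(y, dtype=float).reshape(len(y), -1)   # (T, q)
    kde = gaussian_kde(y.T)            # scipy expects shape (dims, samples)
    p = kde(y.T) + eps                 # natural density at each sample
    volume = np.prod(y.max(axis=0) - y.min(axis=0) + eps)
    p_tilde = 1.0 / volume             # constant uniform density
    w = p_tilde / p
    return w / w.mean()                # normalize so the weights average to 1

# Rare macro-states receive large weights; common ones are down-weighted.
weights = inverse_probability_weights(np.random.randn(500, 2))
```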
Additionally, to apply the NIS+ to data generated by complex systems like multi-agent systems and cellular automata, the structures of the encoder can be extended by defining two kinds of combinations: stacking the basic encoder and concatenating the basic encoder. Here, a basic encoder is the composition of an invertible neural network and a projection operation. By inverting the directions of all the basic encoders and supplementing empty inputs with Gaussian noise vectors, we can obtain the corresponding decoder.
Extensive numerical experiments have demonstrated that the NIS+ can automatically learn the emergent dynamics and the coarse-graining strategy from data and can integrate these results to identify causal emergence.
For instance, in experiments using training data generated by adding Gaussian noise to SIR dynamics, the NIS+ outperformed the NIS in recovering a vector field of derivatives aligned with the true macro-level SIR dynamics. To assess the ability of the NIS+ to learn emergent dynamics, the authors generated macro-state data using the SIR dynamics as the ground truth; however, the NIS+ could not use these data directly and was trained only on noisy micro-level data, which were generated by adding Gaussian noise to the macro-states.
In the experiments with the Boid model, training data were generated using this renowned model, which simulates the flocking behavior of birds. Both the NIS+ and NIS demonstrated the ability to automatically learn macro-level states and dynamics. In this simulation, the authors divided the artificial birds (boids) into two distinct groups. Interestingly, the learned macro-states could also be categorized into two groups, and within each group, the trajectories predicted by the learned macro-dynamics accurately followed the center of the corresponding boid group.
The ability to capture emergent patterns was verified through experiments conducted in the “Game of Life”. The NIS+ demonstrated the capability to automatically discover both static and dynamic patterns, such as the “glider”, within the learned latent space. Additionally, when clear emergent patterns appeared in the data, larger effective information (EI) was observed for the learned emergent macro-dynamics.
Across all the training data generated through simulations, the NIS+ outperformed comparative models, including the NIS, variational autoencoders, and feed-forward neural networks, in experiments testing the model on data distributions that differed from the training data.
The paper also presented results on real data. Two sets of real fMRI time-series data from human subjects were used. The first dataset, AOMIC ID1000 [125], contained fMRI scanning data collected while subjects watched the same movie clip. As a comparison, another fMRI dataset, AOMIC PIOP2 [125], containing resting-state data from 50 subjects, was also utilized. The primary training objective was to predict the fMRI signal at the next time step using the learned macro-dynamics and decoder while simultaneously maximizing the effective information (EI) of the macro-level dynamics. Ultimately, a one-dimensional macro-state was found to represent all 100-dimensional micro-state data. Through attribution analysis, the authors found that micro-level signals with larger weights contributing to the one-dimensional macro-state were located in areas associated with visual tasks. However, in the comparison dataset with resting-state data, seven-dimensional macro-states were found to represent the micro-states, and attribution experiments revealed that micro-states with larger weights were distributed across different brain areas.
After the macro-dynamics and coarse-graining strategy were learned, the EIs at both the macro-level and micro-level could be calculated so that causal emergence could be quantified. The experiments showed that the measure of causal emergence depends on different types of noise. If noise was added to the observational data (micro-state), the degree of causal emergence increased with the level of noise. In contrast, if the noise was from the dynamical mechanism, i.e., added to macro-states, the degree of causal emergence decreased with the level of noise. This implies that coarse-graining can learn to eliminate noise in raw data so that better macro-dynamics with larger EI can emerge. However, if the noise is from the internal dynamics, it cannot be removed through coarse-graining.

4.2. Why Is Causal Emergence “Causal”?

In the previous sections, we discussed two frameworks that utilize machine learning and neural network techniques to address the challenge of causal emergence identification. Both frameworks place significant emphasis on maximizing EI to learn the coarse-graining strategy and macro-dynamics. Notably, the NIS+ framework extends the NIS approach to achieve real EI maximization, whereas the NIS relies on adjusting the hyperparameter q to optimize EI.
Moreover, the NIS+ has demonstrated an additional capability to generalize to data distributions that differ from the training data. This suggests that, through EI maximization, the NIS+ can learn an invariant causal mechanism that is independent of input distribution shifts. However, before extending this method to other machine learning scenarios, an essential question needs to be addressed: Is there a genuine relationship between causal emergence, EI maximization, and causality itself?
In the previous sections, we introduced two theoretical frameworks of causal emergence: Hoel’s framework and Rosas’s framework. In both frameworks, the measurement of causal emergence heavily relies on mutual information. However, it is crucial to acknowledge that mutual information primarily quantifies correlation rather than causality. So why do the authors assert that their theories pertain to “causal” emergence?
The key point lies in the research objectives of these frameworks, which focus on a Markov dynamical system. In this context, the dynamical mechanism is assumed to be known, and confounding variables are not considered. Under this assumption, measures related to correlation can effectively indicate causality. While mutual information itself is not a direct measure of causality, it can serve as a proxy for causality when the underlying dynamics are well understood and confounding factors are absent. Therefore, in the context of these frameworks, the use of mutual information as a measure for causal emergence is justified because it captures the relationships between variables in a way that aligns with the assumed known dynamics of the system.
When comparing the two frameworks, Hoel’s framework can be considered more “causal” than Rosas’s framework. This is because Hoel’s framework introduces the do operator in the definition of effective information (EI), as expressed in Equation (7). The do operator allows for interventions and represents a causal manipulation of variables, which strengthens the causal interpretation of the framework. Furthermore, the significance of EI lies in its deep connections with other measures of causation proposed in various fields [28]. According to the information in Section 3.5.3, these measures aim to quantify the causal impact of an intervention or treatment on an outcome. By incorporating the do operator, EI aligns with these causal effect measures and provides a valuable tool for assessing causal relationships.
Furthermore, let us delve into the application background of the NIS and NIS+ to understand why these frameworks, based on EI maximization, are considered causal. Firstly, the input data in both frameworks, denoted as x t , are generated by a Markovian dynamical system, with no other confounding factors considered, apart from the system itself. As a result, the state of two consecutive time steps, represented as x t and x t + 1 , without loss of generality, can be modeled by a simple causal diagram x t x t + 1 due to the Markovian property. This causal diagram naturally adheres to the exogeneity assumption in causal inference [16], indicating the absence of any confounders between the two states. Consequently, the “do” operator in EI can be replaced with conditional probability, which can be estimated from observations.
Taking all these aspects into consideration, we conclude that the theoretical frameworks of causal emergence and the machine learning frameworks for causal emergence identification, namely the NIS and NIS+, are indeed related to causality. Moreover, they can be extended to other scenarios within the realm of machine learning. We further explore and discuss this point in the subsequent sections.

4.3. Causal Emergence for Machine Learning

In this sub-section, we introduce how causal emergence can enhance machine learning in out-of-distribution scenarios. It turns out that the do intervention introduced in EI captures the causal dependency from the data generation process and thus complements correlation-based machine learning algorithms.

4.3.1. Out-of-Distribution Generalization

As demonstrated in the previous subsection, the measurement of causal emergence by vanilla effective information [19] is defined for stationary dynamical systems in a self-supervised fashion, since the input (cause) space and the output (effect) space share the same support. However, Hoel generalized effective information to general input-output systems [26] (see Section 3.5.5), where the cause and effect spaces may take arbitrary support, either discrete or continuous. Empowered by this general notion of EI, causal emergence could be applied in supervised machine learning to evaluate the strength of causation between the feature space $\mathcal{X}$ (or its learned representation) and the target space $\mathcal{Y}$, thus enhancing the prediction from the cause (feature) to the effect (target). It is worth noting that a direct fit from $\mathcal{X}$ to $\mathcal{Y}$ on the observations suffices for common prediction tasks under an i.i.d. assumption, which implies that the training and test data are independently and identically distributed.
However, if samples are drawn from outside the training distribution, it is essential to learn a representation space that generalizes from the training to the test environment, known as out-of-distribution (OOD) generalization. Due to the common belief that causal relations generalize better than statistical correlations [126], causal emergence theory could serve as a criterion for the causation embedded in the representation space. The occurrence of causal emergence indicates the revelation of potential causal factors of the target, thus producing a robust representation space with regard to out-of-distribution generalization.

Introduction to OOD

Before the detailed discussion of the connection between causal emergence and OOD generalization, we begin with a brief introduction to the latter topic. Out-of-distribution generalization toward arbitrary distributional shifts is generally considered impossible [127]. Therefore, it is commonly assumed that there exists an underlying causal mechanism governing the data generation process, which remains invariant under distributional shifts. Formally, according to Pearl's Structural Equation Model [16], which causally formulates data generation, the target $Y$ is supposed to be generated from a part of the feature $X$, i.e., $Y = f(X_c, \epsilon)$, where $X_c$ represents the selected features or a representation of $X$, and $\epsilon$ is a random noise independent of $(X, Y)$. Obviously, $X_c$ is the true cause of the target, and the causal mechanism $f$ is presumed to be invariant across training and test environments. On the other hand, the raw feature $X$ might comprise a mixture of causal and non-causal variables. The latter could arise from confounding, selection bias, or anti-causal mechanisms, all of which introduce spurious correlations between the target and non-causal variables. The strength and direction of these spurious correlations vary across environments, resulting in distributional shifts that hamper the out-of-distribution performance of machine learning techniques. As a remedy, a prominent branch of OOD algorithms [126,128,129,130] learns an invariant representation space that distills the causal variables from the raw feature space. Typically, a metric of the representation is devised to verify the stability of the correlation between the learned representation space and the target, and it is adopted as a regularization term in the optimization objective to encourage the representation learner to extract causal variables from features. The paradigm of invariant representation learning can be expressed as [131]:
$$\min_{f, \phi} R(f(\phi(X)), Y) + \lambda \cdot \mathrm{reg}(\phi),$$
where $\phi(X)$ is the learned representation, $f$ is the learned predictor, $R$ is the task-specific risk function, and $\mathrm{reg}$ is the regularization term.
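To make the paradigm concrete, the following is a minimal PyTorch sketch of one optimization step of this objective; the network shapes, the mean-squared-error risk, and the pluggable `reg` argument are illustrative assumptions of ours rather than any specific published algorithm.

```python
import torch
import torch.nn as nn

# Hypothetical representation learner phi and predictor f.
phi = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 8))
f = nn.Linear(8, 1)
lam = 1.0  # the trade-off weight lambda

def step(x, y, reg, optimizer):
    """One step of min_{f, phi} R(f(phi(X)), Y) + lambda * reg(phi(X))."""
    h = phi(x)                                            # representation phi(X)
    risk = nn.functional.mse_loss(f(h).squeeze(-1), y)    # task risk R
    loss = risk + lam * reg(h)                            # penalized objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(list(phi.parameters()) + list(f.parameters()), lr=1e-3)
x, y = torch.randn(32, 20), torch.randn(32)
step(x, y, reg=lambda h: torch.tensor(0.0), optimizer=optimizer)  # ERM baseline
```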
Representation learning-based OOD algorithms are diversified in their constraints on the representation space characterized by the regularizers. Each metric of the representation fits a specific data generation pattern and does not guarantee to filter out the causal variables when the setting is violated. However, causal emergence potentially offers a united metric of representation for OOD generalization rooted in causality theory.
Before diving into the application of causal emergence, we first discuss representative regularizers for representation learning and their limitations. Domain-invariant representation learning proposes an aligned feature space with the constraint $g(X) \perp E$, where $E$ denotes the environment index over training and test environments. The regularization is implemented via the Maximum Mean Discrepancy (MMD) distance between representations across environments [132] or an adversarial classifier for the environment index [130]. However, the alignment constraint provides a generalization guarantee only under covariate shift [129], where the distribution shift is restricted to the features $X$ without perturbing $P(Y|X)$. On the other hand, invariant risk minimization [126,133] pursues an invariant conditional distribution $P(Y|g(X))$ of the representation across environments, which translates to $Y \perp E \mid g(X)$. Invariant risk minimization has been proven to recover the causal variable in data generation processes where the non-causal variable is anti-causally generated by the target, but it is reported to fail under covariate shift [134]. Further, conditional invariant representation [128,132] is designed specifically for image classification, where the target is modeled as the cause of the feature rather than its effect; here, the conditional independence $g(X) \perp E \mid Y$ is enforced on the representation space.
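For concreteness, a minimal sketch of the MMD alignment penalty mentioned above is shown below; the RBF kernel with a fixed bandwidth is a simplifying assumption (practical implementations often combine kernels at multiple scales).

```python
import torch

def mmd_rbf(h_a, h_b, sigma=1.0):
    """Squared MMD between two batches of representations under an RBF kernel.

    h_a, h_b: (n, d) and (m, d) tensors of encoded features from two
    environments. Small values indicate that the representation
    distributions are aligned across environments.
    """
    def kernel(x, y):
        d2 = torch.cdist(x, y) ** 2                  # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (kernel(h_a, h_a).mean() + kernel(h_b, h_b).mean()
            - 2 * kernel(h_a, h_b).mean())

# Usable as the `reg` term in the sketch above, e.g.,
# reg = lambda h: mmd_rbf(h[:16], h[16:])
# when the two halves of a batch come from different environments.
```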
It has been theoretically proven that any statistical independence regularization is valid for a subgroup of Structural Equation Models, and it does not generalize across all data generation processes [135]. As a result, the metric of representation manages to capture the causation between the representation space and the target variable only if a specific causal graph of data generation is given, hindering the general applicability of invariant representation learning. Worse still, the existence of the environment index E in the independence regularizers introduces the necessity of multiple environments for training procedures, increasing the data collection and labeling expenses. To this end, there is active demand for a unified and general metric of representation for OOD generalization, for which causal emergence might be a good candidate.

Causal Emergence and OOD

The link between OOD generalization and causal emergence can be attributed to the do intervention in effective information. By definition [16], intervening on the feature distribution with $do(X)$ modifies the structural equations of all the variables in the feature space to a constant value, whereas the equation for the target remains intact as $Y = f(X_c)$. On one hand, the assumption that the causal mechanism is invariant to distributional shift is respected, since the causal correlation between causal variables and the target is preserved. On the other hand, the spurious correlation between non-causal variables and the target is removed by the intervention. For example, a non-causal variable anti-causally correlated with the target leads to a structural equation $X_n = h(Y, \epsilon)$, and $do(X_n)$ replaces the equation with $X_n = U$, where $U \sim U_{max}$ and $U_{max}$ represents the maximum entropy distribution on the domain of $X_n$. As a result, $X_n$ is no longer caused by $Y$, and the spurious correlation vanishes. In another example, if a non-causal variable is correlated with the target through a confounder $E$ (possibly in the feature space), the structural equations take the form $X_n = \phi(E, \epsilon)$ and $Y = f(X_c, E, \epsilon)$. The do intervention would then modify the first equation to $X_n = U$, and the confounding ceases to exist. In summary, the intervention introduced by EI captures the causal correlation between features and the target while suppressing spurious correlations. This renders EI an ideal metric to assess the causality contained in the representation space and, thus, the generalization ability of the induced algorithm.
Recall that an increment in effective information implies the occurrence of causal emergence. Therefore, we propose a conjecture that out-of-distribution generalization could be achieved simultaneously with maximizing EI (or normalized EI) for representation learning. Notably, Shannon's mutual information holds the property that no representation gains mutual information with the target over the raw feature space, i.e., $I(g(X); Y) \le I(X; Y)$, as per the data processing inequality. Thus, direct fitting is inclined to absorb as much information about the target as possible into the representation space. In contrast, EI (or normalized EI) would possibly peak at an intermediate stage of abstraction of the raw features, coinciding with the philosophy of OOD generalization that less can be more. Ideally, at the peak of EI (or normalized EI) where causal emergence occurs, all non-causal features are excluded and the causal features are revealed, resulting in the most informative representation while remaining invariant to distributional shifts.
The removal of spurious correlations and recovery of causal mechanisms by manipulating feature distributions are principles shared by the reweighting technique widely adopted in the OOD literature. For instance, stable learning [136,137,138] is designed for scenarios where collinearity exists in input variables, causing spurious correlations between non-causal variables and the target. It achieves this by reweighting training samples to decorrelate the features, thus reducing collinearity and eliminating spurious correlations. Similar to EI, stable learning is free of environment index labeling. Further, both sample reweighting and feature decorrelation share the philosophy of distribution intervention. In this sense, EI could be viewed as an information-theoretic abstraction of reweighting-based debiasing techniques for OOD generalization.

5. Discussion and Perspectives

We have presented several quantitative theoretical frameworks for causal emergence. However, numerous unexplored problems and implications warrant further discussion. In this section, we address four key topics for future research: causal emergence and causal representation learning; ontological and epistemological causality and emergence; potential applications in complex systems; and understanding complex systems from the perspective of causal emergence.

5.1. Causal Emergence and Causal Representation Learning

Causal representation learning (CRL) is an emerging field in artificial intelligence (AI) [33] that combines two important fields in AI: representation learning and causal inference. Representation learning aims to extract important features (or representations) hidden in data to make predictions automatically [139]. It can be regarded as a typical application of deep learning and has achieved remarkable success in various domains such as image classification [111], face recognition [112], language understanding [113,114,115], and game playing [109,116]. However, conventional representation learning suffers from a critical limitation: it can only capture the information of associations within the data but not the underlying causal relationships. Therefore, it is important to consider causality in representation learning.
To address this problem, CRL combines the advantages of these two sub-fields, representation learning and causal inference, to automatically extract both the important features and the causal relationships behind the data [33].
A typical CRL scenario is depicted in Figure 16. Suppose there is a set of variables and causal mechanisms (including causal graphs and structural equations) that describes how the world of robotic arms and colorful blocks works. However, these mechanisms are not directly observable for the agent undergoing causal representation learning. Instead, what the agent can observe is a set of images generated by robotic arms and blocks. The objective of a causal learning framework is to extract the causal variables and mechanisms from the observed images, and the variables and mechanisms can be used to implement other downstream tasks, for example, causal relationship discovery [140] and predictions in different environments [120] or domains [118,121].
When comparing Figure 14 and Figure 15 with Figure 16, we can see that, as a machine learning framework, the Neural Information Squeezer Plus can be regarded as a kind of CRL framework. Thus, with the NIS and NIS+, we effectively employ CRL to address the problem of identifying causal emergence (CE). Both the CE and CRL frameworks feature an encoder and decoder designed to represent raw data as causal variables, and latent causal mechanisms can be learned in both. In this section, we compare the similarities and differences between these two emerging fields.

5.1.1. Similarities

It is interesting to compare the tasks of causal emergence identification within the theoretical framework of Erik Hoel and causal representation learning. Actually, the task of causal emergence identification can be understood as a task of causal representation learning, where the macro-variables are the causal variables, the macro-dynamics are the causal mechanisms, and the coarse-graining strategy is an encoding process that transforms the original data into representations. Within this framework, EI can be understood as a measure of the strength of the causal effect on the mechanism. To identify causal emergence, we need to learn the appropriate causal variables to represent data and discover causal mechanisms at the macro-level (in the representational latent space) to ensure that the EI of the learned causal mechanism in the latent space is larger than that of the original data. Table 1 shows a detailed comparison of causal emergence identification and causal representation learning.
With these similarities, the technologies and concepts from both fields can learn from each other. For example, the techniques of causal representation learning can be applied to discover causal emergence.
On the other hand, the learned abstract causal representations can be viewed as macro-states, which enhances the explainability of causal representation learning. This perspective suggests that CRL essentially uncovers hidden causal emergent features within the data.
Moreover, these similarities between emergent phenomena and CRL contribute to a deeper understanding of emergence itself. By applying CRL frameworks to data generated by dynamical systems exhibiting emergent phenomena, we can extract more profound causal structures. These profound causal structures may serve as the origins of the mysteries surrounding emergence. By delving into the depths of CRL and uncovering the hidden causal relationships within complex systems, we can gain insights into the mechanisms behind emergent phenomena. These causal structures may provide a foundation for understanding the emergence of novel properties and behaviors that arise from the interactions of simpler components.

5.1.2. Differences

However, there are several theoretical differences between CRL and causal emergence. The biggest difference is that CRL assumes that there is a real causal mechanism behind the data, and the data are generated by this causal mechanism [33]. In contrast, for causal emergence identification, the emergent variables and mechanisms at the macro-level are just handy ways to observe and understand data, and “real causality” may not exist. The differences are illustrated in Figure 17.
However, if an epistemological perspective is adopted, this difference disappears because both approaches extract meaningful information from observational data to obtain representations with stronger causal effects.
Another major difference is that the causal mechanism for macro-states in causal emergence is a dynamical system. Therefore, if the dynamics in the macro-state space run on a network, circular structures may exist because the state variables are iterated by the dynamical system. As a result, feedback and circular indirect interactions are allowed in such a model. In CRL, by contrast, the causal mechanism is always represented by a structural causal model, and because these models are directed acyclic graphs, loops and circular structures are not allowed. However, these differences are not essential because Markovian dynamics can always be converted into causal models, as mentioned in Section 2.1.
Finally, unknown confounders are always ignored in causal emergence, whereas they play very important roles in structural causal models.
In summary, there are deep connections between causal emergence and CRL. On one hand, the machine learning framework for causal emergence identification exhibits structures similar to those of CRL frameworks, and the concepts even stand in one-to-one correspondence, as shown in Table 1. Therefore, the ideas and techniques of each side can inform the other.
On the other hand, studies of causal emergence can provide new perspectives and insights for causal representation learning algorithms. For example, we can interpret the learned representations and causal mechanisms as emergent causal laws. The maximization of EI may improve the efficiency of learning the causalities for both CRL and reinforcement learning agents.

5.2. Ontological and Epistemological Causality and Emergence

Although machine learning techniques have facilitated the learning of causal structures and models, as well as the exploration of emergent properties and causation, it is important to consider whether the results obtained through machine learning reflect ontological causality and emergence or if they are merely epistemological phenomena.
Here, ontological causality refers to the causal relationships that exist in objective reality, independent of our knowledge or understanding. Ontological causality explores the fundamental mechanisms and interactions that give rise to causal effects in the world. Similarly, ontological emergence is concerned with the objective existence of emergent properties and their underlying mechanisms.
Epistemological causality, on the other hand, focuses on our knowledge and understanding of causality. It deals with how we perceive, model, and explain causal relationships based on our observations and experiences, but the causality may not exist in the real world. Epistemological emergence, akin to epistemological causality, focuses on our comprehension and elucidation of emergent phenomena. In other words, epistemological causality and emergence depend on an observer, and different observers may have distinct perspectives about causality and emergence for a particular objective phenomenon.
There has been a longstanding debate regarding the ontological and epistemological aspects of causality and emergence throughout history [17,22,68,76]. The authors of [68] highlighted that the concept of "causation" in the literature is often vague and should be differentiated into "cause" and "reason", aligning with ontological and epistemological causality, respectively. A "cause" is a genuine cause that sufficiently produces its effects (per the causal closure principle [141] and the exclusion principle [142]), whereas a "reason" serves merely as an explanation that helps individuals comprehend the effects. A reason may not possess the same rigor as a genuine cause, but it offers a degree of predictability that individuals find valuable in certain circumstances.
However, distinguishing between them, particularly when addressing specific phenomena, remains a challenging task. One such controversial concept is downward causation, which has sparked extensive discussions. The question of whether downward causation exists objectively or not remains open. In [24], Rosas argued that downward causation not only exists independently of particular observers but also provides a characterization of the phenomenon through quantified measures using information decomposition. However, Yurchenko proposed that it is important to differentiate between two separate concepts: causality and reasoning. According to this perspective, downward causation falls under the category of “reason” rather than causation. In this context, causality and reasoning can be seen as representing ontological and epistemological causality, respectively.
Similarly, debates persist regarding the nature of causal emergence. The question arises as to whether causal emergence is a genuine phenomenon that exists independently of particular observers [77]. In Section 3.5.4, we highlighted that different coarse-graining strategies can lead to distinct macro-dynamic mechanisms with varying measures of EI. In essence, different coarse-graining strategies can be seen as representing different observers. However, Hoel’s theory proposes a criterion to differentiate between coarse-graining methods, namely EI maximization. Consequently, for a given set of Markovian dynamics, only the coarse-graining strategy and corresponding macro-dynamics that maximize the EI measure can be considered as an objective outcome. Nevertheless, a challenge arises when multiple solutions exist that maximize EI, introducing a degree of subjectivity [77]. The authors of [31,32] addressed this issue by introducing constraints on prediction errors in micro-states, which partially alleviates the problem. However, the question of whether optimal results have unique solutions remains an open problem that requires further investigation.
Similarly, although the incorporation of machine learning cannot resolve debates surrounding ontological and epistemological causality and emergence, it can provide objective standards that help mitigate subjectivity. This is because machine learning algorithms strive to optimize objective functions. Thus, machine learning agents can be viewed as “objective” observers that make judgments on causality and emergence. This represents an additional advantage of incorporating machine learning. However, the issue of unique solutions is crucial in this approach and warrants further attention.
Are the machine learning results ontological or epistemological? The answer is that the results are epistemological and depend on machine learning algorithms. However, this does not mean that all the results of machine learning are meaningless since if the learning agents are well trained and the defined mathematical objectives are effectively optimized, the results can also be considered objective and independent of the algorithms.
It is intriguing to pose the same question regarding human cognition. Do all cognitive results in the human brain, encompassing causality, coarse-graining strategies, and macro-dynamics, reflect objective reality, or are they purely epistemological? Firstly, answering this question is exceedingly challenging. Secondly, the cognitive results specific to a particular human brain must objectively manifest within the neural network structures. Thus, for that brain, these results are ontological. Similar reasons hold for machine learning. The cognitive results of machine learning algorithms are recorded by the structures and parameters in neural networks. Therefore, studying these structures may reveal the properties of the original systems, and the nature of the interactions between the observer and the observed objects can be objectively reflected by machines.
Furthermore, the integration of machine learning can assist in establishing a theoretical framework for modeling observers and studying the interactions between the observer and the corresponding observed complex systems. This framework allows us not only to explore the hard problems regarding causality and emergence but also to understand the limitations and boundaries of the observer. One example of this research is the emergent classicality of quantum systems from the information bottleneck formed by machine learning algorithms [143].

5.3. Potential Applications in Complex Systems

In Section 4, we discussed the techniques related to causal emergence identification, machine learning, and causal inference. In this sub-section, we discuss the potential applications of these techniques in complex systems. Complex systems can be understood as a large network, with events connected by causal links. Automatically discovering the complex causal relationships from data is a challenging problem.
The research area of causal discovery has tried to address this problem in various ways [144,145,146,147,148,149]. However, additional challenges may arise when applying these causal discovery methods in complex systems because circular causal structures and cross-level causation may exist.
Circular causal structures widely exist in complex dynamical systems due to the feedback effect. That is, one variable may affect itself via a direct self-loop feedback or a long chain forming a circle. This circular causal structure may pose a challenge to existing methods of causality because most of these methods study directed acyclic causal graphs. However, recent progress has seen the development of some methods for discovering these circular structures in a data-driven manner, e.g., [46,47,48,49,150,151,152,153].
Another aspect is that higher-level or cross-level causation may exist if the ingredients of scale and coarse-graining are considered. For example, downward causation describes the causal effect between macro-level and micro-level variables. Thus, the causal connections may be cross-level. If causal emergence occurs in a complex system, some strong causality may be found between macro-variables. All of these factors in complex systems present new challenges. The machine learning methods discussed in Section 4.1.1 can address the question of whether higher-level causality exists at the macro-level. However, it is important to note that this is a global property; the methods must be further developed and extended to find local causal relationships between macro-variables, or between macro-variables and micro-variables [98]. Moreover, existing methods of causal discovery must be extended to consider how to group a set of variables or coarse-grain a system. For example, ref. [154] proposed a coarse-to-fine causal discovery algorithm based on Granger causality and a graph neural network, in which the grouping of variables proceeds from coarse to fine, improving the efficiency of the algorithm. Other multi-level causal discovery methods were proposed and discussed in [155,156,157].
Another interesting problem is emergence detection. In complex dynamical systems, various higher-level patterns, such as waves, periodic oscillations, and solitons, are ubiquitous. For instance, in the climate system, typhoons and tornadoes are emergent vortex structures. In urban areas, traffic jams also emerge as a result of interactions among a large number of cars. Social riots are another example of emergent events at higher levels of human society. Identifying these emergent patterns at an early stage is crucial and significant [81,158,159,160,161,162]. Therefore, there is an urgent need for a method that can automatically detect these emergent patterns and even provide early warning signals.
In this review, we mainly focus on the emergence of causality; however, we are not limited to this specific type of emergence. Other macroscopic or global-level properties can also be emergent, although causal emergence may be the most important one. For example, in [163], the authors discussed the concept of emergent information closure, which refers to the idea that the encoded information within an agent can form a closed system that is not influenced by the external world. The authors then argued that an agent with information closure can be regarded as a kind of consciousness. Symmetry is another interesting property of some complex systems, and it may be emergent. For example, any single trajectory of a large number of random walkers in a two-dimensional Euclidean space is random. Nevertheless, the Gaussian distribution surface describing how the number of walkers falling in each small region changes with location is isotropic. This kind of symmetry can only be found at the macro-level and is, thus, emergent. It would be interesting and useful to develop a method for automatically discovering this kind of emergent symmetry. The NIS framework can be extended to address the problem of emergent symmetry identification if the optimized objective, EI, is replaced with a measure of isotropy or a more general symmetry [164]. Similarly, we can also find other emergent symmetries using this framework once their measurement and optimization are possible.
However, it is crucial to acknowledge the inherent limitations of applying machine learning techniques to causal discovery and emergence identification problems. For instance, in [45], the authors highlighted the existence of "statistically equivalent" but causally distinct DAGs, implying that different causal structures can be constructed to account for the same dataset. Consequently, when utilizing machine learning techniques to uncover causality and emergent properties, similar challenges need to be addressed. Further research in this direction warrants significant attention.

5.4. Understanding Complex Systems from Causal Emergence

A profound understanding of causal emergence and emergent causality can provide insights into various mysterious phenomena in complex systems, including living systems, social systems, climate systems, ecosystems, and more. Significantly, fundamental questions, such as free will [68], consciousness [165], and life, are all intricately connected to emergent causation [166]. For instance, free will can be viewed as an emergent form of downward causation [68]. Social phenomena can be comprehended through the lens of causal emergence [167]. Interestingly, both the EI and $\Phi$ID frameworks for causal emergence have connections with one of the theories of consciousness, namely integrated information theory [168]. However, understanding these abstract concepts and phenomena in the context of causal emergence is merely an initial step.
For a specific system, there are three problems that should be addressed in future studies: (1) When does causal emergence occur? (2) How does emergent causality have functional effects on the system? (3) How does emergent causality change when the system is changed to adapt to the environment? We discuss these problems one by one.
We still do not know when causal emergence will occur and how the measure of causal emergence changes with some key parameters of a system. In [32], the authors showed how the measure of causal emergence changes with different noises in the Boid model (see the relevant discussion in Section 4.1.2). It is reasonable to expect that there is a phase transition of emergent causality within a complex system when some key parameter changes because the causality or strength of the causal effect is also a global property, and it may be dependent on some order parameters.
If causal emergence occurs in the system, how does the emergent causation affect the parts and the whole of the system? For example, the emergent “I” can be understood as an emergent macro-variable [68]. How does “I” influence the part, say the foot, to move? This problem is non-trivial because it relates to the problem of mind–body interaction. It is important to study the information flows at both the macro-level and micro-level together to understand this phenomenon.
Finally, what is the relationship between adaptation and emergence [4,169,170]? Sometimes, when we refer to a property as emergent, we essentially mean that this property can be developed through adaptation. Therefore, adaptation or evolution serves as a causal force for certain emergent properties. This concept also applies to emergent causality. For instance, downward causation, which is commonly found in complex systems, emerges as a result of adaptation and evolution. The next crucial question is, how can we evolve an emergent property or causation? This problem resembles the issue of designing emergence [171,172,173]. However, our aim here is to seek an explanation rather than a design. We want to understand the specific environment and manner in which the observed emergent causation can be evolved through adaptation.
All these problems require further studies in the future. However, these problems are just a few of the more interesting problems that need to be addressed.

Author Contributions

Conceptualization, J.Z.; methodology, J.Z., B.Y. and P.C.; writing, B.Y., J.Z., A.L., J.W., Z.W., M.Y., K.L. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by National Natural Science Foundation of China (No. 62141607, U1936219).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No data available.

Acknowledgments

This work is inspired by the insightful discussions held in the “Causal Emergence” series reading groups organized by Swarma Club and the “Swarma-Kaifeng” Workshop. We would like to extend our sincere gratitude to Professors Yizhuang You, Jing He, Chaochao Lu, and Yanbo Zhang for their invaluable contributions and insightful discussions. We express our appreciation for the support from the Save 2050 Programme, a joint initiative of Swarma Club and X-Order. Additionally, we are grateful for the support received from Swarma Research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Calculation of EI with Transition Probability Matrix

Calculating effective information ($EI$) requires knowledge of the joint probability distribution of the system. The state transition probability matrix (TPM) defines the probability of transitioning from one state of the system to another. Specifically, the $j$-th element of the $i$-th row of the TPM, denoted $TPM(i,j)$, represents the probability of the system being in state $s_j$ at time $t+1$, given that it is in state $s_i$ at time $t$ (i.e., $P(X_{t+1}=s_j \mid X_t=s_i)$):
$$TPM(i,j) = P(X_{t+1}=s_j \mid X_t=s_i) = \frac{P(X_{t+1}=s_j,\, X_t=s_i)}{P(X_t=s_i)}$$
Assuming we can “force” the system to conform to a maximum entropy distribution (i.e., a uniform distribution) over its $N$ states, the probability of being in any given state $s_i$ at time $t$, denoted $P_u(X_t=s_i)$, is $\frac{1}{N}$.
As a result, the TPM can be expressed as the product of the joint probability under the system’s maximum entropy distribution, denoted $P_u(X_{t+1}=s_j,\, X_t=s_i)$, and the total number of states, $N$, as demonstrated below:
$$TPM(i,j) = \frac{P_u(X_{t+1}=s_j,\, X_t=s_i)}{P_u(X_t=s_i)} = \frac{P_u(X_{t+1}=s_j,\, X_t=s_i)}{1/N} = N \times P_u(X_{t+1}=s_j,\, X_t=s_i)$$
Hence, the TPM can be used to express both the joint probability and the marginal probability at time $t+1$ under the system’s maximum entropy distribution:
$$P_u(X_{t+1}=s_j,\, X_t=s_i) = \frac{1}{N}\, TPM(i,j)$$
$$P_u(X_{t+1}=s_j) = \sum_i P_u(X_{t+1}=s_j,\, X_t=s_i) = \frac{1}{N} \sum_i TPM(i,j)$$
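As a quick sanity check (an illustrative example of our own, not drawn from the cited works), consider a two-state system whose dynamics deterministically swap the two states. Then
$$TPM = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad P_u(X_{t+1}=s_j,\, X_t=s_i) = \frac{1}{2}\, TPM(i,j), \qquad P_u(X_{t+1}=s_1) = P_u(X_{t+1}=s_2) = \frac{1}{2}.$$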
Using these relationships, $EI$ can be calculated directly from the TPM:
$$\begin{aligned} EI &= I(I_D; E_D) = I(X_t; X_{t+1}) \mid do(X_t) \sim U \\ &= \sum_{i,j} P_u(X_{t+1}=s_j,\, X_t=s_i) \log_2 \frac{P_u(X_{t+1}=s_j,\, X_t=s_i)}{P_u(X_{t+1}=s_j)\, P_u(X_t=s_i)} \\ &= \sum_{i,j} \frac{1}{N}\, TPM(i,j) \log_2 \frac{\frac{1}{N}\, TPM(i,j)}{\frac{1}{N} \sum_k TPM(k,j) \times \frac{1}{N}} \\ &= \frac{1}{N} \sum_{i,j} TPM(i,j) \log_2 \frac{N \times TPM(i,j)}{\sum_k TPM(k,j)} \end{aligned}$$
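As a concrete illustration, the following minimal Python sketch implements the final formula above. This is our own example code, not code from the cited works; the function name `effective_information` and the toy transition matrices are assumptions made purely for illustration.

```python
import numpy as np

def effective_information(tpm: np.ndarray) -> float:
    """Effective information (in bits) of a Markov chain, computed from its
    transition probability matrix under a uniform (maximum-entropy)
    intervention on X_t. tpm[i, j] = P(X_{t+1} = s_j | X_t = s_i);
    every row must sum to 1.
    """
    n = tpm.shape[0]
    # P_u(X_{t+1} = s_j) = (1/N) * sum_i TPM(i, j): the column means of the TPM.
    marginal_next = tpm.mean(axis=0)
    ei = 0.0
    for i in range(n):
        for j in range(n):
            p = tpm[i, j]
            if p > 0:
                # (1/N) * TPM(i, j) * log2( N * TPM(i, j) / sum_k TPM(k, j) )
                ei += (p / n) * np.log2(p / marginal_next[j])
    return ei

# Two-state "swap" dynamics: deterministic and non-degenerate.
tpm_swap = np.array([[0.0, 1.0],
                     [1.0, 0.0]])
print(effective_information(tpm_swap))  # 1.0

# Fully degenerate dynamics: both states map to s_1.
tpm_deg = np.array([[1.0, 0.0],
                    [1.0, 0.0]])
print(effective_information(tpm_deg))   # 0.0
```

For the two-state swap system from the example above, this returns $\log_2 2 = 1$ bit, the maximum possible for $N = 2$, whereas the fully degenerate system returns 0 bits, since observing the effect state reveals nothing about which cause produced it.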

References

  1. Ledford, H. How to solve the world’s biggest problems. Nature 2015, 525, 308–311. [Google Scholar] [CrossRef] [PubMed]
  2. Mensah, P.; Katerere, D.; Hachigonta, S.; Roodt, A. Systems Analysis Approach for Complex Global Challenges; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  3. Bar-Yam, Y. General features of complex systems. In Encyclopedia of Life Support Systems (EOLSS); UNESCO, EOLSS Publishers: Oxford, UK, 2002; Volume 1. [Google Scholar]
  4. Holland, J.H. Emergence: From Chaos to Order; OUP: Oxford, UK, 2000. [Google Scholar]
  5. Artime, O.; De Domenico, M. From the origin of life to pandemics: Emergent phenomena in complex systems. Philos. Trans. R. Soc. A 2022, 380, 20200410. [Google Scholar] [CrossRef] [PubMed]
  6. Lagercrantz, H.; Changeux, J.P. The emergence of human consciousness: From fetal to neonatal life. Pediatr. Res. 2009, 65, 255–260. [Google Scholar] [CrossRef] [PubMed]
  7. Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; et al. Emergent abilities of large language models. arXiv 2022, arXiv:2206.07682. [Google Scholar]
  8. Anderson, P.W. More is different: Broken symmetry and the nature of the hierarchical structure of science. Science 1972, 177, 393–396. [Google Scholar] [CrossRef] [PubMed]
  9. Meehl, P.E.; Sellars, W. The concept of emergence. Minn. Stud. Philos. Sci. 1956, 1, 239–252. [Google Scholar]
  10. Bedau, M.A. Weak emergence. Philos. Perspect. 1997, 11, 375–399. [Google Scholar] [CrossRef]
  11. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47. [Google Scholar] [CrossRef]
  12. Wikipedia. Butterfly Effect—Wikipedia, The Free Encyclopedia. 2023. Available online: https://en.wikipedia.org/wiki/Butterfly_effect (accessed on 27 September 2023).
  13. Wikipedia. Homeostasis—Wikipedia, The Free Encyclopedia. 2023. Available online: https://en.wikipedia.org/wiki/Homeostasis (accessed on 4 October 2023).
  14. Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econom. J. Econom. Soc. 1969, 37, 424–438. [Google Scholar] [CrossRef]
  15. Pearl, J. Models, Reasoning and Inference; Cambridge University Press: Cambridge, UK, 2000; Volume 19. [Google Scholar]
  16. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
  17. Kim, J. ‘Downward causation’ in emergentism and nonreductive physicalism. In Emergence or Reduction; Walter de Gruyter: Berlin, Germany, 1992; pp. 119–138. [Google Scholar]
  18. Fromm, J. Types and forms of emergence. arXiv 2005, arXiv:nlin/0506028. [Google Scholar]
  19. Hoel, E.P.; Albantakis, L.; Tononi, G. Quantifying causal emergence shows that macro can beat micro. Proc. Natl. Acad. Sci. USA 2013, 110, 19790–19795. [Google Scholar] [CrossRef] [PubMed]
  20. Ellis, G.F. Efficient, formal, material, and final causes in biology and technology. Entropy 2023, 25, 1301. [Google Scholar] [CrossRef] [PubMed]
  21. Oizumi, M.; Albantakis, L.; Tononi, G. From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Comput. Biol. 2014, 10, e1003588. [Google Scholar] [CrossRef] [PubMed]
  22. Crutchfield, J.P.; Young, K. Inferring statistical complexity. Phys. Rev. Lett. 1989, 63, 105. [Google Scholar] [CrossRef] [PubMed]
  23. Seth, A.K. Measuring emergence via nonlinear Granger causality. ALIFE 2008, 2008, 545–552. [Google Scholar]
  24. Rosas, F.E.; Mediano, P.A.; Jensen, H.J.; Seth, A.K.; Barrett, A.B.; Carhart-Harris, R.L.; Bor, D. Reconciling emergences: An information-theoretic approach to identify causal emergence in multivariate data. PLoS Comput. Biol. 2020, 16, e1008289. [Google Scholar] [CrossRef]
  25. Hoel, E.P. When the Map Is Better Than the Territory. Entropy 2017, 19, 188. [Google Scholar] [CrossRef]
  26. Chvykov, P.; Hoel, E. Causal geometry. Entropy 2020, 23, 24. [Google Scholar] [CrossRef]
  27. Klein, B.; Hoel, E. The emergence of informative higher scales in complex networks. Complexity 2020, 2020, 8932526. [Google Scholar] [CrossRef]
  28. Comolatti, R.; Hoel, E. Causal emergence is widespread across measures of causation. arXiv 2022, arXiv:2202.01854. [Google Scholar]
  29. Williams, P.L.; Beer, R.D. Nonnegative decomposition of multivariate information. arXiv 2010, arXiv:1004.2515. [Google Scholar]
  30. Mediano, P.A.; Rosas, F.; Carhart-Harris, R.L.; Seth, A.K.; Barrett, A.B. Beyond integrated information: A taxonomy of information dynamics phenomena. arXiv 2019, arXiv:1909.02297. [Google Scholar]
  31. Zhang, J.; Liu, K. Neural information squeezer for causal emergence. Entropy 2022, 25, 26. [Google Scholar] [CrossRef]
  32. Yang, M.; Wang, Z.; Liu, K.; Rong, Y.; Yuan, B.; Zhang, J. Finding emergence in data: Causal emergence inspired dynamics learning. arXiv 2023, arXiv:2308.09952. [Google Scholar]
  33. Schölkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Toward causal representation learning. Proc. IEEE 2021, 109, 612–634. [Google Scholar] [CrossRef]
  34. Iwasaki, Y.; Simon, H.A. Causality and model abstraction. Artif. Intell. 1994, 67, 143–194. [Google Scholar] [CrossRef]
  35. Sucar, L.E. Probabilistic Graphical Models; Advances in Computer Vision and Pattern Recognition; Springer: London, UK, 2015; Volume 10, p. 1. [Google Scholar]
  36. Rubin, D.B. Causal inference using potential outcomes: Design, modeling, decisions. J. Am. Stat. Assoc. 2005, 100, 322–331. [Google Scholar] [CrossRef]
  37. Malinsky, D.; Danks, D. Causal discovery algorithms: A practical guide. Philos. Compass 2018, 13, e12470. [Google Scholar] [CrossRef]
  38. Spirtes, P.; Glymour, C.; Scheines, R. Causation Prediction and Search, 2nd ed.; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  39. Chickering, D.M. Learning equivalence classes of Bayesian-network structures. J. Mach. Learn. Res. 2002, 2, 445–498. [Google Scholar]
  40. Hume, D. An enquiry concerning human understanding. In Seven Masterpieces of Philosophy; Routledge: Abingdon, UK, 2016; pp. 183–276. [Google Scholar]
  41. Eells, E. Probabilistic Causality; Cambridge University Press: Cambridge, UK, 1991; Volume 1. [Google Scholar]
  42. Suppes, P. A probabilistic theory of causality. Br. J. Philos. Sci. 1973, 24, 409–410. [Google Scholar]
  43. Hoel, E.; Levin, M. Emergence of informative higher scales in biological systems: A computational toolkit for optimal prediction and control. Commun. Integr. Biol. 2020, 13, 108–118. [Google Scholar] [CrossRef]
  44. Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  45. Textor, J.; Van der Zander, B.; Gilthorpe, M.S.; Liśkiewicz, M.; Ellison, G.T. Robust causal inference using directed acyclic graphs: The R package ‘dagitty’. Int. J. Epidemiol. 2016, 45, 1887–1894. [Google Scholar] [CrossRef] [PubMed]
  46. Richardson, T.S. A discovery algorithm for directed cyclic graphs. arXiv 2013, arXiv:1302.3599. [Google Scholar]
  47. Spirtes, P.L. Directed cyclic graphical representations of feedback models. arXiv 2013, arXiv:1302.4982. [Google Scholar]
  48. Lacerda, G.; Spirtes, P.L.; Ramsey, J.; Hoyer, P.O. Discovering cyclic causal models by independent components analysis. arXiv 2012, arXiv:1206.3273. [Google Scholar]
  49. Forré, P.; Mooij, J.M. Constraint-based causal discovery for non-linear structural causal models with cycles and latent confounders. arXiv 2018, arXiv:1807.03024. [Google Scholar]
  50. Ellison, G.T. Using Directed Acyclic Graphs (Dags) to Represent the Data Generating Mechanisms of Disease and Healthcare Pathways: A Guide for Educators, Students, Practitioners and Researchers. In Teaching Biostatistics in Medicine and Allied Health Sciences; Springer: Berlin/Heidelberg, Germany, 2023; pp. 61–101. [Google Scholar]
  51. Gong, C.; Yao, D.; Zhang, C.; Li, W.; Bi, J. Causal Discovery from Temporal Data: An Overview and New Perspectives. arXiv 2023, arXiv:2303.10112. [Google Scholar]
  52. Pepper, S.C. Emergence. J. Philos. 1926, 23, 241–245. [Google Scholar] [CrossRef]
  53. Winning, J.; Bechtel, W. Complexity, control and goal-directedness in biological systems. In The Routledge Handbook of Emergence; Routledge: Abingdon, UK, 2019; p. 134. [Google Scholar]
  54. Hendry, R.F. Substance and structure. In The Routledge Handbook of Emergence; Routledge: Abingdon, UK, 2019; p. 339. [Google Scholar]
  55. Huxley, T.H. Evolution and Ethics 1893–1943; Pilot Press: King’s Lynn, UK, 1947. [Google Scholar]
  56. Mill, J.S. Of the Composition of Causes. In A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation; Cambridge Library Collection—Philosophy; Cambridge University Press: Cambridge, UK, 2011; Volume 1, pp. 425–436. [Google Scholar] [CrossRef]
  57. Gibb, S.; Hendry, R.F.; Lancaster, T. The Routledge Handbook of Emergence; Routledge: Abingdon, UK, 2019. [Google Scholar]
  58. Ross, W.D.; Smith, J.A. The Works of Aristotle: Metaphysica, by WD Ross; Clarendon Press: Oxford, UK, 1908; Volume 8. [Google Scholar]
  59. Holland, J.H. Hidden Order: How Adaptation Builds Complexity; Addison Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1996. [Google Scholar]
  60. Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 27–31 July 1987; pp. 25–34. [Google Scholar]
  61. Gardner, M. The fantastic combinations of John Conway’s new solitaire game “Life”. Sci. Am. 1970, 223, 120–123. [Google Scholar]
  62. Luhmann, N. Social Systems; Stanford University Press: Redwood City, CA, USA, 1995. [Google Scholar]
  63. Crutchfield, J.P. The calculi of emergence: Computation, dynamics and induction. Phys. D Nonlinear Phenom. 1994, 75, 11–54. [Google Scholar] [CrossRef]
  64. Bedau, M.A.; Humphreys, P. Emergence: Contemporary Readings in Philosophy and Science; MIT Press: Cambridge, MA, USA, 2008. [Google Scholar]
  65. Bedau, M. Downward causation and the autonomy of weak emergence. Principia Int. J. Epistemol. 2002, 6, 5–50. [Google Scholar]
  66. Wolfram, S. A New Kind of Science; Wolfram Media Champaign: Champaign, IL, USA, 2002; Volume 5. [Google Scholar]
  67. Merton, R.K. The Sociology of Science: Theoretical and Empirical Investigations; University of Chicago Press: Chicago, IL, USA, 1973. [Google Scholar]
  68. Yurchenko, S.B. Can there be a synergistic core emerging in the brain hierarchy to control neural activity by downward causation? TechRxiv 2023. [Google Scholar] [CrossRef]
  69. Harré, R. The Philosophies of Science; Oxford University Press: New York, NY, USA, 1985. [Google Scholar]
  70. Baas, N.A. Emergence, hierarchies, and hyperstructures. In Artificial Life III, SFI Studies in the Science of Complexity, XVII; Routledge: Abingdon, UK, 1994; pp. 515–537. [Google Scholar]
  71. Newman, D.V. Emergence and strange attractors. Philos. Sci. 1996, 63, 245–261. [Google Scholar] [CrossRef]
  72. O’Connor, T. Emergent properties. Am. Philos. Q. 1994, 31, 91–104. [Google Scholar]
  73. Héder, M.; Paksi, D. A Criticism of Weak Emergence. Polanyiana 2019, 28, 1–2. [Google Scholar]
  74. Jackson, F.; Pettit, P. In defense of explanatory ecumenism. Econ. Philos. 1992, 8, 1–21. [Google Scholar] [CrossRef]
  75. Kim, J. Making sense of emergence. Philos. Stud. Int. J. Philos. Anal. Tradit. 1999, 95, 3–36. [Google Scholar]
  76. Bonabeau, E.; Dessalles, J.L. Detection and emergence. arXiv 2011, arXiv:1108.4279. [Google Scholar] [CrossRef]
  77. Dewhurst, J. Causal emergence from effective information: Neither causal nor emergent? Thought J. Philos. 2021, 10, 158–168. [Google Scholar] [CrossRef]
  78. Eberhardt, F.; Lee, L.L. Causal emergence: When distortions in a map obscure the territory. Philosophies 2022, 7, 30. [Google Scholar] [CrossRef]
  79. Shalizi, C.R. Causal Architecture, Complexity and Self-Organization in Time Series and Cellular Automata; The University of Wisconsin-Madison: Madison, WI, USA, 2001. [Google Scholar]
  80. Shalizi, C.R.; Moore, C. What is a macrostate? Subjective observations and objective dynamics. arXiv 2003, arXiv:cond-mat/0303625. [Google Scholar]
  81. Mnif, M.; Müller-Schloer, C. Quantitative emergence. In Organic Computing—A Paradigm Shift for Complex Systems; Springer: Basel, Switzerland, 2011; pp. 39–52. [Google Scholar]
  82. Tang, M.; Mao, X. Information entropy-based metrics for measuring emergences in artificial societies. Entropy 2014, 16, 4583–4602. [Google Scholar] [CrossRef]
  83. Fisch, D.; Jänicke, M.; Sick, B.; Müller-Schloer, C. Quantitative emergence–A refined approach based on divergence measures. In Proceedings of the 2010 Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems, Budapest, Hungary, 27 September–1 October 2010; IEEE Computer Society: Washington, DC, USA, 2010; pp. 94–103. [Google Scholar]
  84. Fisch, D.; Fisch, D.; Jänicke, M.; Kalkowski, E.; Sick, B. Techniques for knowledge acquisition in dynamically changing environments. ACM Trans. Auton. Adapt. Syst. (TAAS) 2012, 7, 1–25. [Google Scholar] [CrossRef]
  85. Fisch, D.; Jänicke, M.; Müller-Schloer, C.; Sick, B. Divergence measures as a generalised approach to quantitative emergence. In Organic Computing—A Paradigm Shift for Complex Systems; Springer: Basel, Switzerland, 2011; pp. 53–66. [Google Scholar]
  86. Holzer, R.; De Meer, H.; Bettstetter, C. On autonomy and emergence in self-organizing systems. In International Workshop on Self-Organizing Systems, Proceedings of the Third International Workshop, IWSOS 2008, Vienna, Austria, 10–12 December 2008; Springer: Berlin/Heidelberg, Germany, 2008; pp. 157–169. [Google Scholar]
  87. Holzer, R.; de Meer, H. Methods for approximations of quantitative measures in self-organizing systems. In Proceedings of the Self-Organizing Systems: 5th International Workshop, IWSOS 2011, Karlsruhe, Germany, 23–24 February 2011; Proceedings 5. Springer: Berlin/Heidelberg, Germany, 2011; pp. 1–15. [Google Scholar]
  88. Teo, Y.M.; Luong, B.L.; Szabo, C. Formalization of emergence in multi-agent systems. In Proceedings of the 1st ACM SIGSIM Conference on Principles of Advanced Discrete Simulation, Montreal, QC, Canada, 19–22 May 2013; pp. 231–240. [Google Scholar]
  89. Szabo, C.; Teo, Y.M. Formalization of weak emergence in multiagent systems. ACM Trans. Model. Comput. Simul. (TOMACS) 2015, 26, 1–25. [Google Scholar] [CrossRef]
  90. Christensen, K.; Moloney, N.R. Complexity and Criticality; World Scientific Publishing Company: Singapore, 2005; Volume 1. [Google Scholar]
  91. McComb, W.D. Renormalization Methods: A Guide for Beginners; OUP Oxford: Cambridge, UK, 2003. [Google Scholar]
  92. Kemeny, J.; Snell, J. Finite Markov Chains: With a New Appendix “Generalization of a Fundamental Matrix”; Undergraduate Texts in Mathematics; Springer: New York, NY, USA, 1983. [Google Scholar]
  93. Pfante, O.; Bertschinger, N.; Olbrich, E.; Ay, N.; Jost, J. Comparison between different methods of level identification. Adv. Complex Syst. 2014, 17, 1450007. [Google Scholar] [CrossRef]
  94. Kotsalis, G.; Megretski, A.; Dahleh, M.A. Balanced truncation for a class of stochastic jump linear systems and model reduction for hidden Markov models. IEEE Trans. Autom. Control 2008, 53, 2543–2557. [Google Scholar] [CrossRef]
  95. White, L.B.; Mahony, R.; Brushe, G.D. Lumpable hidden Markov models-model reduction and reduced complexity filtering. IEEE Trans. Autom. Control 2000, 45, 2297–2306. [Google Scholar] [CrossRef]
  96. Wolpert, D.H.; Grochow, J.A.; Libby, E.; DeDeo, S. Optimal high-level descriptions of dynamical systems. arXiv 2014, arXiv:1409.7403. [Google Scholar]
  97. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620. [Google Scholar] [CrossRef]
  98. Varley, T.F. Flickering emergences: The question of locality in information-theoretic approaches to emergence. Entropy 2022, 25, 54. [Google Scholar] [CrossRef]
  99. Cheng, P.W.; Novick, L.R. Causes versus enabling conditions. Cognition 1991, 40, 83–120. [Google Scholar] [CrossRef] [PubMed]
  100. Varley, T.F. Causal emergence in discrete and continuous dynamical systems. arXiv 2020, arXiv:2003.13075. [Google Scholar]
  101. Marrow, S.; Michaud, E.J.; Hoel, E. Examining the Causal Structures of Deep Neural Networks Using Information Theory. Entropy 2020, 22, 1429. [Google Scholar] [CrossRef] [PubMed]
  102. Griebenow, R.; Klein, B.; Hoel, E. Finding the right scale of a network: Efficient identification of causal emergence through spectral clustering. arXiv 2019, arXiv:1908.07565. [Google Scholar]
  103. Klein, B.; Swain, A.; Byrum, T.; Scarpino, S.V.; Fagan, W.F. Exploring noise, degeneracy and determinism in biological networks with the einet package. Methods Ecol. Evol. 2022, 13, 799–804. [Google Scholar] [CrossRef]
  104. Klein, B.; Hoel, E.; Swain, A.; Griebenow, R.; Levin, M. Evolution and emergence: Higher order information structure in protein interactomes across the tree of life. Integr. Biol. 2021, 13, 283–294. [Google Scholar] [CrossRef] [PubMed]
  105. Swain, A.; Williams, S.D.; Di Felice, L.J.; Hobson, E.A. Interactions and information: Exploring task allocation in ant colonies using network analysis. Anim. Behav. 2022, 189, 69–81. [Google Scholar] [CrossRef]
  106. Luppi, A.I.; Mediano, P.A.; Rosas, F.E.; Holland, N.; Fryer, T.D.; O’Brien, J.T.; Rowe, J.B.; Menon, D.K.; Bor, D.; Stamatakis, E.A. A synergistic core for human brain evolution and cognition. Nat. Neurosci. 2022, 25, 771–782. [Google Scholar] [CrossRef]
  107. Varley, T.F.; Hoel, E. Emergence as the conversion of information: A unifying theory. Philos. Trans. R. Soc. A 2022, 380, 20210150. [Google Scholar] [CrossRef]
  108. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; Van Den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489. [Google Scholar] [CrossRef]
  109. Dabney, W.; Kurth-Nelson, Z.; Uchida, N.; Starkweather, C.K.; Hassabis, D.; Munos, R.; Botvinick, M. A distributional code for value in dopamine-based reinforcement learning. Nature 2020, 577, 671–675. [Google Scholar] [CrossRef] [PubMed]
  110. Senior, A.W.; Evans, R.; Jumper, J.; Kirkpatrick, J.; Sifre, L.; Green, T.; Qin, C.; Žídek, A.; Nelson, A.W.; Bridgland, A.; et al. Improved protein structure prediction using potentials from deep learning. Nature 2020, 577, 706–710. [Google Scholar] [CrossRef] [PubMed]
  111. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  112. Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Li, Z.; Liu, W. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5265–5274. [Google Scholar]
  113. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar]
  114. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  115. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017. [Google Scholar]
  116. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [Google Scholar] [CrossRef]
  117. Fan, L.; Li, L.; Ma, Z.; Lee, S.; Yu, H.; Hemphill, L. A bibliometric review of large language models research from 2017 to 2023. arXiv 2023, arXiv:2304.02020. [Google Scholar]
  118. Cui, P.; Athey, S. Stable learning establishes some common ground between causal inference and machine learning. Nat. Mach. Intell. 2022, 4, 110–115. [Google Scholar] [CrossRef]
  119. Kaddour, J.; Lynch, A.; Liu, Q.; Kusner, M.J.; Silva, R. Causal machine learning: A survey and open problems. arXiv 2022, arXiv:2206.15475. [Google Scholar]
  120. Peters, J.; Bühlmann, P.; Meinshausen, N. Causal inference by using invariant prediction: Identification and confidence intervals. J. R. Stat. Soc. Ser. B Stat. Methodol. 2016, 78, 947–1012. [Google Scholar] [CrossRef]
  121. Kuang, K.; Cui, P.; Athey, S.; Xiong, R.; Li, B. Stable prediction across unknown environments. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 1617–1626. [Google Scholar]
  122. Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Netw. 1991, 4, 251–257. [Google Scholar] [CrossRef]
  123. Teshima, T.; Ishikawa, I.; Tojo, K.; Oono, K.; Ikeda, M.; Sugiyama, M. Coupling-based invertible neural networks are universal diffeomorphism approximators. Adv. Neural Inf. Process. Syst. 2020, 33, 3362–3373. [Google Scholar]
  124. Rosenblatt, M. Remarks on some nonparametric estimates of a density function. Ann. Math. Stat. 1956, 27, 832–837. [Google Scholar] [CrossRef]
  125. Snoek, L.; van der Miesen, M.M.; Beemsterboer, T.; Van Der Leij, A.; Eigenhuis, A.; Steven Scholte, H. The Amsterdam Open MRI Collection, a set of multimodal MRI datasets for individual difference analyses. Sci. Data 2021, 8, 85. [Google Scholar] [CrossRef] [PubMed]
  126. Arjovsky, M.; Bottou, L.; Gulrajani, I.; Lopez-Paz, D. Invariant risk minimization. arXiv 2019, arXiv:1907.02893. [Google Scholar]
  127. Ye, H.; Xie, C.; Cai, T.; Li, R.; Li, Z.; Wang, L. Towards a theoretical framework of out-of-distribution generalization. Adv. Neural Inf. Process. Syst. 2021, 34, 23519–23531. [Google Scholar]
  128. Huh, D.; Baidya, A. The Missing Invariance Principle found–the Reciprocal Twin of Invariant Risk Minimization. Adv. Neural Inf. Process. Syst. 2022, 35, 23023–23035. [Google Scholar]
  129. Zhao, H.; Des Combes, R.T.; Zhang, K.; Gordon, G. On learning invariant representations for domain adaptation. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 7523–7532. [Google Scholar]
  130. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 2016, 17, 1–35. [Google Scholar]
  131. Wang, J.; Lan, C.; Liu, C.; Ouyang, Y.; Qin, T.; Lu, W.; Chen, Y.; Zeng, W.; Yu, P. Generalizing to unseen domains: A survey on domain generalization. IEEE Trans. Knowl. Data Eng. 2022, 35, 8052–8072. [Google Scholar] [CrossRef]
  132. Li, Y.; Gong, M.; Tian, X.; Liu, T.; Tao, D. Domain generalization via conditional invariant representations. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  133. Koyama, M.; Yamaguchi, S. Out-of-distribution generalization with maximal invariant predictor. arXiv 2020, arXiv:2008.01883. [Google Scholar]
  134. Ahuja, K.; Caballero, E.; Zhang, D.; Gagnon-Audet, J.C.; Bengio, Y.; Mitliagkas, I.; Rish, I. Invariance principle meets information bottleneck for out-of-distribution generalization. Adv. Neural Inf. Process. Syst. 2021, 34, 3438–3450. [Google Scholar]
  135. Kaur, J.N.; Kiciman, E.; Sharma, A. Modeling the data-generating process is necessary for out-of-distribution generalization. arXiv 2022, arXiv:2206.07837. [Google Scholar]
  136. Shen, Z.; Cui, P.; Zhang, T.; Kunag, K. Stable learning via sample reweighting. In Proceedings of the Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 5692–5699. [Google Scholar]
  137. Shen, Z.; Cui, P.; Liu, J.; Zhang, T.; Li, B.; Chen, Z. Stable learning via differentiated variable decorrelation. In Proceedings of the 26th ACM Sigkdd International Conference on Knowledge Discovery & Data Mining, Virtual, 6–10 July 2020; pp. 2185–2193. [Google Scholar]
  138. Zhang, X.; Cui, P.; Xu, R.; Zhou, L.; He, Y.; Shen, Z. Deep stable learning for out-of-distribution generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 5372–5382. [Google Scholar]
  139. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef] [PubMed]
  140. Monti, R.P.; Zhang, K.; Hyvärinen, A. Causal discovery with general non-linear relationships using non-linear ica. In Proceedings of the Uncertainty in Artificial Intelligence, PMLR, Tel Aviv, Israel, 22–25 July 2019; pp. 186–195. [Google Scholar]
  141. Clayton, P.; Davies, P. The Re-Emergence of Emergence: The Emergentist Hypothesis; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  142. Kim, J. Physicalism, or Something Near Enough; Princeton University Press: Princeton, NJ, USA, 2007. [Google Scholar]
  143. Zhang, Z.; You, Y.Z. Observing Schrödinger’s Cat with Artificial Intelligence: Emergent Classicality from Information Bottleneck. arXiv 2023, arXiv:2306.14838. [Google Scholar]
  144. Glymour, C.; Zhang, K.; Spirtes, P. Review of causal discovery methods based on graphical models. Front. Genet. 2019, 10, 524. [Google Scholar] [CrossRef] [PubMed]
  145. Zanga, A.; Ozkirimli, E.; Stella, F. A survey on causal discovery: Theory and practice. Int. J. Approx. Reason. 2022, 151, 101–129. [Google Scholar] [CrossRef]
  146. Donges, J.F.; Zou, Y.; Marwan, N.; Kurths, J. The backbone of the climate network. Europhys. Lett. 2009, 87, 48007. [Google Scholar] [CrossRef]
  147. Kalisch, M.; Bühlman, P. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. J. Mach. Learn. Res. 2007, 8, 613–636. [Google Scholar]
  148. Zheng, X.; Aragam, B.; Ravikumar, P.K.; Xing, E.P. Dags with no tears: Continuous optimization for structure learning. Adv. Neural Inf. Process. Syst. 2018, 31, 9472–9483. [Google Scholar]
  149. Zhu, S.; Ng, I.; Chen, Z. Causal discovery with reinforcement learning. arXiv 2019, arXiv:1906.04477. [Google Scholar]
  150. Rantanen, K.; Hyttinen, A.; Järvisalo, M. Learning optimal cyclic causal graphs from interventional data. In Proceedings of the International Conference on Probabilistic Graphical Models, PMLR, Skørping, Denmark, 23–25 September 2020; pp. 365–376. [Google Scholar]
  151. Zhang, Z.; Zhao, Y.; Liu, J.; Wang, S.; Tao, R.; Xin, R.; Zhang, J. A general deep learning framework for network reconstruction and dynamics learning. Appl. Netw. Sci. 2019, 4, 110. [Google Scholar] [CrossRef]
  152. Pamfil, R.; Sriwattanaworachai, N.; Desai, S.; Pilgerstorfer, P.; Georgatzis, K.; Beaumont, P.; Aragam, B. Dynotears: Structure learning from time-series data. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Palermo, Italy, 26–28 August 2020; pp. 1595–1605. [Google Scholar]
  153. Vowels, M.J.; Camgoz, N.C.; Bowden, R. D’ya like dags? a survey on structure learning and causal discovery. ACM Comput. Surv. 2022, 55, 1–36. [Google Scholar] [CrossRef]
  154. Cheng, Y.; Li, L.; Xiao, T.; Li, Z.; Suo, J.; He, K.; Dai, Q. CUTS+: High-dimensional Causal Discovery from Irregular Time-series. arXiv 2023, arXiv:2305.05890. [Google Scholar]
  155. Wang, D.; Chen, Z.; Ni, J.; Tong, L.; Wang, Z.; Fu, Y.; Chen, H. Hierarchical graph neural networks for causal discovery and root cause localization. arXiv 2023, arXiv:2302.01987. [Google Scholar]
  156. Zhang, Q.; Zhang, C.; Cheng, S. Wavelet Multiscale Granger Causality Analysis Based on State Space Models. Symmetry 2023, 15, 1286. [Google Scholar] [CrossRef]
  157. Fan, C.; Wang, Y.; Zhang, Y.; Ouyang, W. Interpretable Multi-Scale Neural Network for Granger Causality Discovery. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
  158. Haugen, R.A.; Skeie, N.O.; Muller, G.; Syverud, E. Detecting emergence in engineered systems: A literature review and synthesis approach. Syst. Eng. 2023, 26, 463–481. [Google Scholar] [CrossRef]
  159. O’Toole, E.; Nallur, V.; Clarke, S. Decentralised detection of emergence in complex adaptive systems. ACM Trans. Auton. Adapt. Syst. (TAAS) 2017, 12, 1–31. [Google Scholar] [CrossRef]
  160. O’Toole, E.; Nallur, V.; Clarke, S. Towards decentralised detection of emergence in complex adaptive systems. In Proceedings of the 2014 IEEE Eighth International Conference on Self-Adaptive and Self-Organizing Systems, London, UK, 8–12 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 60–69. [Google Scholar]
  161. Pazho, A.D.; Noghre, G.A.; Purkayastha, A.A.; Vempati, J.; Martin, O.; Tabkhi, H. A Survey of Graph-based Deep Learning for Anomaly Detection in Distributed Systems. arXiv 2022, arXiv:2206.04149. [Google Scholar] [CrossRef]
  162. Liu, R.; Wu, Z.; Yu, S.; Lin, S. The emergence of objectness: Learning zero-shot segmentation from videos. Adv. Neural Inf. Process. Syst. 2021, 34, 13137–13152. [Google Scholar]
  163. Chang, A.Y.; Biehl, M.; Yu, Y.; Kanai, R. Information closure theory of consciousness. Front. Psychol. 2020, 11, 1504. [Google Scholar] [CrossRef]
  164. Liu, Z.; Tegmark, M. Machine learning hidden symmetries. Phys. Rev. Lett. 2022, 128, 180201. [Google Scholar] [CrossRef] [PubMed]
  165. Tononi, G.; Koch, C. Consciousness: Here, there and everywhere? Philos. Trans. R. Soc. B Biol. Sci. 2015, 370, 20140167. [Google Scholar] [CrossRef] [PubMed]
  166. Walker, S.I. Top-down causation and the rise of information in the emergence of life. Information 2014, 5, 424–439. [Google Scholar] [CrossRef]
  167. Elder-Vass, D. The Causal Power of Social Structures: Emergence, Structure and Agency; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  168. Tononi, G.; Boly, M.; Massimini, M.; Koch, C. Integrated information theory: From consciousness to its physical substrate. Nat. Rev. Neurosci. 2016, 17, 450–461. [Google Scholar] [CrossRef]
  169. Holland, J.H. Hidden order. In Business Week-Domestic Edition; Addison-Wesley: Boston, MA, USA, 1995; p. 21. [Google Scholar]
  170. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
  171. Adamatzky, A.; Chen, G.; Bazzan, A.L.C.; Brasil, R.; Burguillo, J.C.; Corchado, E.; Davendra, D.; Lampinen, J.; Middendorf, M.; Ott, E.; et al. Emergence, Complexity and Computation; Springer: Berlin/Heidelberg, Germany, 2022; p. 10624. [Google Scholar]
  172. Kreyssig, P.; Dittrich, P. Emergent control. In Organic Computing—A Paradigm Shift for Complex Systems; Springer: Basel, Switzerland, 2011; pp. 67–78. [Google Scholar]
  173. Mitchell, M.; Hraber, P.; Crutchfield, J.P. Revisiting the edge of chaos: Evolving cellular automata to perform computations. arXiv 1993, arXiv:adap-org/9303003. [Google Scholar]
Figure 1. The causal models (left column) and the corresponding causal hierarchy (right column).
Figure 2. The various types of causation that exist within multi-scale complex systems. In this figure, solid arrows represent common causation, which is generally accepted without much controversy. Dotted arrowed lines indicate a form of “causation” driven by supervenience, whereas dashed arrowed lines represent emergent causation, occurring either at an intra-level or downward.
Figure 3. G-causality and G-autonomy.
Figure 4. G-autonomy.
Figure 5. The basic idea of Erik Hoel’s causal emergence theory. The colored circles represent micro-states, whereas the colored squares represent macro-states.
Figure 6. Examples of effect coefficient, determinism, degeneracy, and how coarse-graining can change them. (a) Deterministic system with zero degeneracy; (b) Deterministic system with zero effect coefficient due to high degeneracy; (c) Partially deterministic system; (d) Deterministic system with low effect coefficient; (e) Coarse-grained system from (c) or (d), resulting in a fully deterministic system with a high effect coefficient.
Figure 7. The mechanism and transition probability matrix of a Boolean network. Each variable’s next state is determined by the combination of the current states of two other variables, as shown in the Boolean network (a). If we examine an atomic subsystem with two inputs and one output, such as AB→C, the next state of C depends on the present states of A and B. If A = 0 and B = 0, the probability of C being 1 is 70%, whereas the probability of it being 0 is 30%, as indicated in the table (b). The same microscopic dynamics applies to the other atomic subsystems AB→D, CD→A, and CD→B. When we expand to subsystems with two inputs and two outputs, AB→CD and CD→AB, we obtain the transition probability matrix (TPM), as demonstrated in the table (c). Finally, we extend to the dynamics of the entire system ABCD→ABCD, whose TPM is represented as a heatmap (d). The effective information and related metrics of the entire system are as follows: $EI_{mi} = 1.1486$, $Eff = 0.2871$, $Determinism = 0.3390$, $Degeneracy = 0.0519$.
Figure 8. An example showing how different coarse-graining strategies applied to the same Boolean network, shown in Figure 7, can lead to different Boolean dynamics. (a) Coarse-graining strategy 1. (b) Macro-dynamics resulting from strategy 1. (c) The corresponding heatmap of the TPM and the effective information metrics of the macroscopic system using strategy 1. (d) Coarse-graining strategy 2. (e,f) The macro-dynamics and TPM corresponding to coarse-graining strategy 2 in (d), respectively.
Figure 9. Venn diagram of PID. (a) Relationship of mutual information between Y and X1, X2. (b–d) Examples showing that joint mutual information is solely contributed by redundant, synergistic, and unique information. (e) Redundancy lattice representation of (a).
Figure 10. (a) Redundancy lattice of “forward PID” and its target variable $X_{t+1}^1 X_{t+1}^2$. (b) Redundancy lattice of “backward PID” and its target variable $X_t^1 X_t^2$.
Figure 11. (a) Relationships between the redundancy lattices of forward PID and backward PID. (b) Double-redundancy lattice, whose vertices correspond to the edges in (a). Distinct colors of dots and lines are employed to distinguish the source elements of $X_t$.
Figure 12. Causal decoupling (G) and downward causation (D) are represented by a double-redundancy lattice. Distinct colors of dots are employed to distinguish the source elements of $X_t$ (colors correspond to Figure 11).
Figure 13. Synergistic causes can be further divided into causal decoupling and downward causation.
Figure 14. The architecture and workflow of the Neural Information Squeezer (NIS). The NIS consists of three main components: an encoder ϕ, a dynamics learner f, and a decoder ϕ†. The encoder ϕ comprises an invertible neural network ψ and a projection operator χ, which simply discards a fixed set of dimensions from the output of ψ. The decoder ϕ† incorporates the same invertible neural network as the encoder but implements its inverse ψ⁻¹. In cases where the input is incomplete, it can be complemented using Gaussian random noise.
Figure 15. The workflow and architecture of our proposed framework, the Neural Information Squeezer Plus (NIS+), for discovering causal emergence within data. (a) Various forms of input data. (b) The framework of our proposed model, the NIS+, which incorporates our previous model, the NIS. The boxes represent functions or neural networks, and an arrow pointing to a cross represents the operation of information discarding. $x_t$ and $x_{t+1}$ represent the observational data of micro-states, whereas $\hat{x}_{t+1}$ represents the predicted micro-state. $y_t = \phi(x_t)$ and $y_{t+1} = \phi(x_{t+1})$ represent the macro-states obtained by encoding the micro-states with the encoder. $\hat{y}_t = \phi(\hat{x}_t)$ and $\hat{y}_{t+1} = \phi(\hat{x}_{t+1})$ represent the predicted macro-states obtained by encoding the predicted micro-states. The mathematical problems that each framework aims to solve are also illustrated in the figure. (c) The various output forms of the NIS+, which include the learned macro-dynamics, captured emergent patterns, coarse-graining functions, and the degree of causal emergence.
Figure 16. Illustration of the workflow of a typical causal representation learning (CRL) agent. The top panel depicts a real block world with a robotic arm and the corresponding causal graph, representing relationships between variables in the real world, which is totally unknown to the agent. Additionally, the data are presented as images generated by the block world. The bottom panel depicts the structure of the causal representation learning agent. It utilizes a convolutional neural network (CNN) to extract meaningful representations from the data and perform downstream tasks based on these representations. The encoder plays a crucial role in capturing relevant features and patterns from the input data, enabling the agent to learn and understand causal relationships.
Figure 17. Comparison of causal emergence identification and causal representation learning. All potential confounders are ignored. The black arrows represent causal relations, whereas the blue arrows represent interactions between macro-variables. The dashed lines with arrows represent the information flows for coarse-graining, representation, or data generation.
Table 1. Comparison of causal emergence identification and causal representation learning.

|  | Causal Emergence Identification | Causal Representation Learning |
| --- | --- | --- |
| Data | Observations (time series) of micro-states | Raw data generated by some causal mechanism in real life |
| Latent variables | Macro-states | Causal representations |
| Causal mechanism | Macro-dynamics | Causal mechanisms |
| Mapping between data and latent variables | Coarse-graining | Representation |
| Optimization for causality | EI maximization | Prediction loss, disentanglement |
| Objective | Finding an optimal coarse-graining strategy and a macro-dynamics with a stronger causal effect | Finding an optimal representation of the raw data so that independent causal mechanisms can be realized by the representations |
