Article

Uncovering the Origins of Instability in Dynamical Systems: How Can the Attention Mechanism Help?

by
Nooshin Bahador
1,2,3 and
Milad Lankarany
1,2,3,4,*
1
Krembil Research Institute, University Health Network (UHN), 60 Leonard Ave., Toronto, ON M5T 0S8, Canada
2
Institute of Biomaterials & Biomedical Engineering (IBBME), University of Toronto, Toronto, ON M5S 3G9, Canada
3
KITE Research Institute, Toronto Rehabilitation Institute, University Health Network (UHN), Toronto, ON M5G 2A2, Canada
4
Department of Physiology, University of Toronto, Toronto, ON M5G 1V7, Canada
*
Author to whom correspondence should be addressed.
Dynamics 2023, 3(2), 214-233; https://doi.org/10.3390/dynamics3020013
Submission received: 28 January 2023 / Revised: 20 March 2023 / Accepted: 11 April 2023 / Published: 17 April 2023
(This article belongs to the Special Issue Recent Advances in Dynamic Phenomena)

Abstract:
The behavior of a network and its stability are governed both by the dynamics of the individual nodes and by their topological interconnections. The attention mechanism, an integral part of neural network models, was initially designed for natural language processing (NLP) and has since shown excellent performance in combining the dynamics of individual nodes and the coupling strengths between them within a network. Despite the undoubted impact of the attention mechanism, it is not yet clear why some nodes of a network receive higher attention weights. To arrive at more explainable solutions, we looked at the problem from a stability perspective. Based on stability theory, negative connections in a network can create feedback loops or other complex structures by allowing information to flow in the opposite direction. These structures play a critical role in the dynamics of a complex system and can contribute to abnormal synchronization, amplification, or suppression. We hypothesized that the nodes involved in organizing such structures can push the entire network into instability modes and therefore need more attention during analysis. To test this hypothesis, the attention mechanism, along with spectral and topological stability analyses, was applied to a real-world numerical problem, i.e., a linear Multi-Input Multi-Output state-space model of a piezoelectric tube actuator. The findings of our study suggest that attention should be directed toward the collective behavior of imbalanced structures and polarity-driven structural instabilities within the network. The results demonstrated that the nodes receiving more attention cause more instability in the system. Our study provides a proof of concept for understanding why perturbing some nodes of a network may cause dramatic changes in the network dynamics.

1. Introduction

In many networks, specific nodes at critical positions act as drivers that push the system into particular modes of action [1]. Observing large-scale network catastrophes in sociological and biological systems, such as the widespread effects of epilepsy in brain networks, poses a few questions: How does a chaotic regime start in complex networks? Where should we look for spreading origins or initiators in the network? Which nodes are most influential in driving changes in the network's dynamics? Why do these particular nodes have the ability to facilitate changes in the state of a system? Can imminent shifts in the network's dynamics be predicted prior to onset so as to enhance preparedness? Answering these questions motivated us to explore how local structures within a network deteriorate stability and push the network into a catastrophic regime. This study leverages principles from stability theory and connects them to attention mechanisms in neural networks.
The attention mechanism is one of the most widely used techniques in natural language processing and computer vision; it focuses on the most informative parts of the data and significantly improves many processing tasks, such as image classification and object detection [2]. The attention mechanism can also help graph convolutional networks focus on nodes with key contributions to the information processing of the graph [3]. In graph neural networks, it has been argued that instead of considering the entire local neighborhood, only nodes with higher attention values should be propagated. Under this assumption, the robustness of the network can be improved by considering only important nodes and ignoring misleading points [4]. Despite the tremendous success of this technique, one thing that is still lacking and has not been addressed much is an explanation of why the attention mechanism works for network analysis and what the attention coefficients exactly reflect.
Seeking an explanation, this study looked at the problem from the stability analysis perspective. The stability concept in graph theory considers how changes in a particular node can affect the rest of the network and how the connectivity of that node depends on other nodes in the network [5,6]. Some studies have examined the stability properties of graph neural networks to see how changes in the underlying topology affect the output of the network [7]. In terms of model optimization, it has been argued that unstable nodes in sparse regions of the network need to be pulled apart to improve the classification decision [8].
Focusing on the stability properties of the networks, the detection of spreading pathways within the network has been the focus of many recent studies. In cases where abnormalities, chaos, or instability can spread rapidly across the network, early spotting of the spreading origins is essential to hinder widespread harm. One example of such a condition is when a small perturbation within the brain network of an epileptic patient leads to seizure propagation at a life-threatening level [9].
A large body of literature has tried to rank the spreading ability of nodes in a network. It has been assumed that nodes with either high nodal centrality or high betweenness centrality are influential in large-scale spreading [10,11]. However, this assumption turned out not to hold for all real-world networks; there were cases in which the highly connected nodes or the nodes with the highest betweenness had little effect on the spreading process [12,13]. One study has argued that the topology of the network organization plays a key role in widespread phenomena. The same study claimed that the spreading process may not necessarily originate in a single node but can start from many nodes simultaneously [14]. There are also reported cases, including localized attacks on networks, where spreading happens locally by covering only a specific group of nodes [15,16,17,18,19]. Considering this disparate evidence, the question of how the spreading ability of nodes in a network should be ranked remains under investigation.
The fact that the topological properties of a network affect the dynamical process [20] can suggest that spreading dynamics are rooted in some hidden structures in the network. It has been reported that complex temporal dynamics in real-world networks may be induced by the spatial dimension [21]. Looking at spatial aspects of chaotic dynamics, one study has argued that the dynamics of the system become chaotic because of homogeneity breaking [22]. There is also strong evidence that symmetry breaking can cause instabilities in networks [23]. Considering these claims and our initial assumption, we further assumed that the existence of hidden symmetry-breaking structures within the network might also cause the emergence of spreading dynamics.
Considering the fact that both the attention mechanism and stability analysis focus on influential nodes, the final question here is whether unstable nodes are the nodes that need more attention. We address this question in the following sections. The first subsection of the Materials and Methods section describes the case study, a real-world numerical problem. This problem is mapped from a state-space model into a graph representation for further analysis, with the states of the model treated as individual nodes within the network. In Section 2.2 of Materials and Methods, an attention-enhanced graph convolutional network (AGCN) is used to classify the nodes of this network. After the AGCN is trained, an attention coefficient for each pair of nodes is extracted, and nodes with higher attention coefficients are identified. To test our hypothesis, namely that the nodes with the potential to move the entire network into an unstable mode are those with higher attention coefficients, three different stability analyses are performed. Finally, the nodes with higher instability risk are identified and compared to those with higher attention coefficients.

2. Materials and Methods

2.1. Simulated Dynamical System

Dynamical systems can be stabilized by state feedback, in which the state vector is used to control the system dynamics. This feedback mechanism can be applied to controllable states. Identifying the most important states can be very helpful in designing an optimal closed-loop control system. This study identifies important states using the attention mechanism and shows that these important nodes are the ones with a greater tendency toward instability.
One class of dynamical systems that requires a feedback mechanism to reach stability is piezoelectric tube actuators; the modeling of such an actuator is used as the real-world numerical example in this study. These actuators are frequently used in micro/nano-scale applications and are highly sensitive to uncertainties, including environmental variations. The piezoelectric tube actuator can be expressed by a linear Multi-Input Multi-Output state-space model using the following equations [24]:
$$\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t)$$
where A, B, C, and D are, respectively, the state matrix, input matrix, output matrix, and feedforward matrix. Variables x and y are, respectively, state and output vectors.
$$A = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
-2.7480 \times 10^{8} & -1.3083 \times 10^{8} & -4.2614 \times 10^{6} & -517.0544 & 0 & 0 & 2.6331 \times 10^{3} & 0.9492 \times 10^{3} \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 49.4899 & 191.8224 & -5.2346 \times 10^{8} & -3.6549 \times 10^{8} & -1.7819 \times 10^{7} & -239.0092
\end{pmatrix}$$
Figure 1 shows a graph created from the dynamical system (1), with A considered as an adjacency matrix representing the topology of a network. The random feature set in Table 1 was also assigned to each node as its attributes. It should be noted that the coefficient matrices B, C, and D of the dynamical system (1) were not used to construct the graph; the adjacency matrix representing the network topology was created from the coefficient matrix A alone. The A matrix in the state-space representation plays the central role in determining the dynamics of the system, as it describes how the system evolves over time. The other matrices (input, output, and feedforward) describe the relationships between the inputs, outputs, and states of the system but do not directly determine its dynamics.
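The construction of the graph from the state matrix can be sketched as follows. This is a minimal sketch: the block-companion reading of A reflects our interpretation of the published matrix, and the random features merely stand in for the attribute set of Table 1.

```python
import numpy as np

# State matrix A of the piezoelectric tube model, read as a weighted
# adjacency matrix: each of the 8 states becomes a node, and every
# nonzero entry A[i, j] becomes a directed, signed edge.
A = np.array([
    [0, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [-2.7480e8, -1.3083e8, -4.2614e6, -517.0544, 0, 0, 2.6331e3, 0.9492e3],
    [0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 49.4899, 191.8224, -5.2346e8, -3.6549e8, -1.7819e7, -239.0092],
])

# Directed edge list implied by the nonzero entries
edges = [tuple(e) for e in np.argwhere(A != 0)]

# Random node attributes standing in for the feature set of Table 1
# (the feature dimension of 4 is a hypothetical choice for illustration)
rng = np.random.default_rng(seed=0)
X = rng.standard_normal((A.shape[0], 4))
```

The signs and cross-coupling terms in the sketch follow the companion-form structure implied by the perturbed matrix in Section 2.3.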

2.2. Attention Mechanism

In this study, an attention-enhanced graph convolutional network (AGCN) composed of several modules was used for node classification. These modules are explained in Section 2.2.1, Section 2.2.2, Section 2.2.3 and Section 2.2.4.

2.2.1. First Module: Initial Node Feature Embedding

The first module performs a self-attention operation on the nodes: a simple dot product (multiplying the node feature matrix by its transpose) that represents the relationships among features. The intuition behind the self-attention operator is to express how two feature vectors are related in the input space. In this operation, a weighted average over all the input vectors is taken. A visual illustration of this weighted average is shown in Figure 2. The dot product over each pair of feature vectors gives their corresponding weight. If the signs of two features match, the weight receives a positive contribution; if they do not match, the contribution is negative. The magnitude of the weight indicates how much the feature should contribute to the total score. Since the weight value produced by this self-attention operator can lie anywhere between negative and positive infinity, Leaky ReLU and SoftMax operators are applied to map all the weight values between zero and one so that they sum to one.
$$\begin{aligned}
X &= [x_1, x_2, \ldots, x_i], \quad x_i \in \mathbb{R}^{N} \\
\omega_{self} &= X X^{T} = [\omega_{11}, \omega_{12}, \ldots] \\
Y &= \omega_{self} X = [y_1, y_2, \ldots, y_i], \quad y_i \in \mathbb{R}^{N} \\
Y' &= \mathrm{SoftMax}\big(\mathrm{LeakyReLU}(Y)\big) = [y'_1, y'_2, \ldots]
\end{aligned}$$
where X is the matrix of the nodes’ features. ωself indicates the self-attention weights. Y is the weighted average of the node features. Y′ is the weighted average of features passed through activation functions.
This weighted average of the node features produces a new set of node features as the output of the self-attention operator, which forms the inputs for the next module.
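A minimal NumPy sketch of this first module follows; the toy feature values are hypothetical and this is not the exact implementation used in the study.

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    return np.where(z > 0, z, slope * z)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    w_self = X @ X.T       # dot product of every pair of feature vectors
    Y = w_self @ X         # weighted average over all input vectors
    # map the unbounded weights into (0, 1) so each row sums to one
    return softmax(leaky_relu(Y), axis=1)

# Toy example: 3 nodes with 2 features each (hypothetical values)
X = np.array([[1.0, -1.0],
              [0.5,  2.0],
              [-1.0, 0.3]])
Y_prime = self_attention(X)
```

The rows of `Y_prime` sum to one, as required by the SoftMax normalization, and form the input to the next module.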

2.2.2. Second Module: Learnable Attention Mechanism

The second module is a single-layer feedforward neural network parameterized by the attention weight vector ( ω A t t ). In this module, the feature vectors of each pair in a new set of nodes (produced in the previous module) are concatenated and passed through Leaky ReLU and SoftMax operators. The goal here is to extract the attention coefficient for each pair of nodes, which represents the importance of one node’s feature to the feature of another one [26].
$$\begin{aligned}
\alpha_{ij} &= \mathrm{SoftMax}\big(\mathrm{LeakyReLU}\big(\omega_{Att}\,[\,y'_i \,\|\, y'_j\,]\big)\big) \\
\omega_{Att} &= [\omega_{Att_1}, \omega_{Att_2}, \ldots, \omega_{Att_i}] \in \mathbb{R}^{2N} \\
\omega_{\alpha} &= [\alpha_{12}, \alpha_{13}, \ldots, \alpha_{ij}] \in \mathbb{R}^{N \times N}
\end{aligned}$$
where $\omega_{Att}$ is the attention weight vector, $y'$ is the weighted average of features passed through activation functions, and $\omega_{\alpha}$ is the attention coefficient matrix.
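The pairwise attention coefficients of this module can be sketched as follows; the weight vector `w_att` is a hypothetical stand-in for the learnable parameters, and the SoftMax normalizes each node's coefficients over its pairs.

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    return np.where(z > 0, z, slope * z)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_coefficients(Yp, w_att):
    # alpha_ij: importance of node j's features to node i, obtained by
    # concatenating the two feature vectors and scoring with w_att
    n = Yp.shape[0]
    scores = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            pair = np.concatenate([Yp[i], Yp[j]])  # 2N-dimensional
            scores[i, j] = leaky_relu(w_att @ pair)
    return softmax(scores, axis=1)                 # normalize over j

# Toy example with N = 2 features per node (hypothetical values)
Yp = np.array([[0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])
w_att = np.array([0.1, -0.3, 0.7, 0.2])           # learnable, length 2N
alpha = attention_coefficients(Yp, w_att)
```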

2.2.3. Third Module: Graph Convolution

The third module performs feature aggregation from the neighbors of each node, calculated as the multiplication of the adjacency and feature matrices. The features of the node itself are as important as those of its neighbors; to account for them, an identity matrix is added to the adjacency matrix (A) to obtain a new adjacency matrix (Ã). To prevent exploding/vanishing gradients caused by high-degree/low-degree nodes and to reduce the sensitivity of the network to the scale of the input data, the matrix multiplication needs to be scaled according to the node degrees (scaling by both rows and columns). This scaling places more weight on low-degree nodes and reduces the impact of high-degree nodes. The motivation is that nodes with low degrees exert greater influence on their neighbors, whereas nodes with high degrees have lower per-neighbor effects because they spread their influence over too many neighbors. As scaling is performed twice (once across rows and once across columns), the square root of the node degree is used. The influence of one node's features on the other nodes is also reflected by the dot product of the new adjacency matrix with the attention coefficient matrix. Finally, graph convolution is completed by putting these pieces together and forming a forward model with a learnable weight matrix W.
$$\begin{aligned}
\tilde{A} &= A + I \\
\tilde{D}_{ii} &= \sum_j \tilde{A}_{ij} \\
\hat{A} &= \tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2} \\
Y &= \mathrm{SoftMax}\big(\mathrm{LeakyReLU}\big(\hat{A}\, \omega_{\alpha}\, X\, W\big)\big)
\end{aligned}$$
where $\tilde{D}$ is the degree matrix of $\tilde{A}$, $\tilde{A}$ is the adjacency matrix with added self-loops, and $\hat{A}$ is its symmetrically normalized form.
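For an unsigned graph, the normalization step can be sketched as below. A toy 3-node path graph is used rather than the signed actuator network, since square roots of negative degrees would require separate handling.

```python
import numpy as np

def normalized_adjacency(A):
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    d = A_tilde.sum(axis=1)                   # node degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # scale rows and columns
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt  # symmetrically normalized A_hat

# Toy path graph 0 - 1 - 2
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
A_hat = normalized_adjacency(A)
```

Because the scaling is applied once across rows and once across columns, each entry is divided by the square roots of both endpoint degrees, matching the text above.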

2.2.4. Final Module: Backpropagation and Training

The goal of using backpropagation is to update each weight in the attention layer (the matrix of $\omega_{Att}$) and the convolution layer (the matrix W) so that the actual output approaches the target output. To do this, the partial derivative of the error (the gradient) with respect to these weights is calculated. Note that the diagonal term of the partial derivative of the SoftMax function is output × (1 − output). Here, W is the learnable weight matrix, $\tilde{A}$ is the adjacency matrix with self-loops, and $\omega_{Att}$ is the attention weight vector in a single-layer feedforward neural network. $\omega_{\alpha}$ is the attention coefficient matrix for each pair of nodes, $y_{target}$ is the target output, and $y$ is the actual output.
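The SoftMax derivative used in the update rule, output × (1 − output) for the diagonal term, can be checked numerically. This is a sanity-check sketch, not the training code itself.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([0.2, -1.0, 0.7])   # arbitrary pre-activation values
s = softmax(z)
eps = 1e-6

# Compare the central-difference derivative ds_i/dz_i with s_i * (1 - s_i)
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    numeric = (softmax(zp)[i] - softmax(zm)[i]) / (2 * eps)
    assert abs(numeric - s[i] * (1 - s[i])) < 1e-6
```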

2.3. Spectral Stability Analysis

The spectral stability of a network is governed by the largest negative eigenvalue of its adjacency matrix [27]. Our hypothesis is that the nodes that need more attention are the ones that can push the entire network into an unstable mode. To test this hypothesis and to check the effect of each node on the stability of the network, we looked at how a perturbation in one column of the adjacency matrix [9] is reflected in its largest eigenvalue. The perturbation level was initially set to 0.5 and gradually increased to 3. The following matrix shows the resulting adjacency matrix after perturbing node 1 by ∆. Those nodes for which the largest negative eigenvalue of matrix $\hat{A}_2$ moves toward zero as their perturbation level increases have the potential to push the entire network into an unstable mode.
$$\hat{A}_2 = \begin{pmatrix}
0 & 1 + \Delta & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
-2.7480 \times 10^{8} & -1.3083 \times 10^{8} + \Delta & -4.2614 \times 10^{6} & -517.0544 & 0 & 0 & 2.6331 \times 10^{3} & 0.9492 \times 10^{3} \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 49.4899 & 191.8224 & -5.2346 \times 10^{8} & -3.6549 \times 10^{8} & -1.7819 \times 10^{7} & -239.0092
\end{pmatrix}$$
where ∆ is the perturbation level.
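The perturbation sweep can be sketched as follows, using our block-companion reading of the state matrix A; ∆ is added to the nonzero entries of the perturbed node's column, mirroring the structure of $\hat{A}_2$.

```python
import numpy as np

A = np.array([
    [0, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [-2.7480e8, -1.3083e8, -4.2614e6, -517.0544, 0, 0, 2.6331e3, 0.9492e3],
    [0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1],
    [0, 0, 49.4899, 191.8224, -5.2346e8, -3.6549e8, -1.7819e7, -239.0092],
])

def largest_real_part(M):
    # dominant eigenvalue (real part) governing spectral stability
    return np.max(np.linalg.eigvals(M).real)

def eigen_sweep(A, node, deltas):
    # perturb the existing connections in the node's column by Delta
    values = []
    for delta in deltas:
        Ap = A.astype(float).copy()
        Ap[Ap[:, node] != 0, node] += delta
        values.append(largest_real_part(Ap))
    return values

deltas = np.arange(0.5, 3.0 + 1e-9, 0.5)   # perturbation levels 0.5 ... 3
sweep = eigen_sweep(A, node=1, deltas=deltas)
```

Repeating the sweep for each node and plotting `sweep` against `deltas` reproduces the kind of comparison shown in Figure 8.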

2.4. Topological Stability Analysis

How are the connections with positive and negative signs arranged within the network, and how do such arrangements affect network stability? Positive and negative signs refer, respectively, to synchronous and anti-synchronous correlations. According to structural balance theory [28], the stability of a three-entity system can be investigated through the signed association between two entities in the presence of a third party. This can be generalized to any signed network by considering the associations between its motifs/subgraphs and the signed links within the motifs. A motif is a recurring pattern of interconnections within the graph, formed by a subset of nodes with a path between each pair of nodes. The collective behavior of the imbalanced motifs may push the network toward an unstable state. Considering all possible ways to connect, a motif is structurally imbalanced when the product of the signs on its edges is negative. In a signed graph, counting the number of imbalanced motifs can tell us about the stability of the network. Figure 3 shows some examples of imbalanced arrangements.
The influence of each node on the stability of the network can be determined by the number of times the node appears in imbalanced motifs. To better quantify this influence, a measure is defined that considers not only imbalanced motifs of different orders but also the weights of the paths that form a cycle within these motifs. For each node and for each imbalanced motif of size 3 that includes that node, the weights of the paths are multiplied and then added together. The same procedure is repeated for imbalanced motifs of sizes 4, 5, and 6. The cube root of the absolute value of the product of these four quantities is then calculated, giving the total cost associated with that node:
$$\begin{aligned}
C_T(N) &= \sqrt[3]{\left| W_{G_N^3} \times W_{G_N^4} \times W_{G_N^5} \times W_{G_N^6} \right|} \\
W_{G_N^3} &= \sum_{i,j,k} \frac{\omega_{ij}\,\omega_{jk}\,\omega_{ki}}{D^2} \\
W_{G_N^4} &= \sum_{i,j,k,m} \frac{\omega_{ij}\,\omega_{jk}\,\omega_{km}\,\omega_{mi}}{D^2} \\
W_{G_N^5} &= \sum_{i,j,k,m,n} \frac{\omega_{ij}\,\omega_{jk}\,\omega_{km}\,\omega_{mn}\,\omega_{ni}}{D^2} \\
W_{G_N^6} &= \sum_{i,j,k,m,n,h} \frac{\omega_{ij}\,\omega_{jk}\,\omega_{km}\,\omega_{mn}\,\omega_{nh}\,\omega_{hi}}{D^2}
\end{aligned}$$
where ω is the weight of the path between each pair of nodes within an imbalanced motif. The terms G N 3 , G N 4 , G N 5 , and G N 6 , respectively, refer to the subset of all possible imbalanced motifs of sizes 3, 4, 5, and 6 that include one specific node. D is the degree of the corresponding node. W corresponds to the normalized sum over the products of motif paths calculated for each node.
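For motifs of size 3, the per-node score $W_{G_N^3}$ can be sketched as below. This is a simplified sketch using directed 3-cycles through the node; larger motif sizes follow the same pattern with longer cycles.

```python
import itertools
import numpy as np

def triad_imbalance_score(A, node):
    # Sum of edge-weight products over size-3 imbalanced motifs (directed
    # 3-cycles through `node` whose sign product is negative), normalized
    # by the squared degree of the node.
    n = A.shape[0]
    others = [k for k in range(n) if k != node]
    degree = np.count_nonzero(A[node]) + np.count_nonzero(A[:, node])
    total = 0.0
    for i, j in itertools.permutations(others, 2):
        w = A[node, i] * A[i, j] * A[j, node]
        if w < 0:            # negative product: structurally imbalanced
            total += w
    return total / degree**2 if degree else 0.0

# Toy signed 3-cycle 0 -> 1 -> 2 -> 0 with one negative edge (imbalanced)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, 0.0, 0.0]])
score = triad_imbalance_score(A, 0)
```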

2.5. Symmetry-Breaking Stability Analysis

In complex networks, symmetry breaking means that some nodes attract or transmit the flow of information more than other nodes due to the network dynamics or the presence of external stimuli. This can lead to the emergence of instability within a network. This phenomenon can occur through the process of self-organization when the nodes in a network interact in a way that they form specific patterns or structures [29]. If a network experiences symmetry breaking, some nodes may begin to differentiate from other nodes and form distinct sub-networks. This process of differentiation can be thought of as a bifurcation, as it represents a sudden and significant change in the structure and behavior of the network. Occurrences of symmetry breaking can be seen in nature, for example, when vascular systems, such as river basins, evolve [30]. This process of differentiation can trigger a cascade of further differentiations within those sub-networks. As the differentiations continue to cascade through the network, they can lead to the emergence of a chaotic regime.
As network dysfunction can be a function of microscale structures and flow distributions [31], and spatial symmetry breaking is one way of studying patterns of information flow, this subsection aimed at identifying spreaders of instability in the network by exploring spatial symmetry-breaking behavior in local flow structures.
Inspired by flabellate (fan-shaped) structures [32], we consider bifurcation points from which more than two paths branch off; these are called flabellate-shaped bifurcations in this study. Depending on the polarity and strength of the individual connections within this symmetry-breaking structure, a polarity transition can occur and form a fractal dipole (Figure 4). This topological polarity transition breaks the balance and has the potential to spread instability across the network.
To find bifurcation nodes in a network where symmetry breaking along with a polarity transition occurs, the hidden structure of information flow first needs to be extracted. As the topological properties of a system affect its dynamics, extracting hidden information flow structures in the network provides a useful tool for understanding its dynamical behavior. A graph-based random walk is a well-known algorithm, inspired by natural language processing, that can reveal these local structures of information flow [33]. Walking on the graph means moving from one node to another in the direction of an edge, and the flow of information within the network corresponds to the walker stepping between nodes. In addition to information flow, activity dynamics on networks can also be modeled by a graph-based random walk [34]. Considering that a random walk on a network can model information spreading and capture network dynamics [35], we leveraged a graph-based random walk algorithm to investigate the existence of symmetry-breaking structures that are not visible in the network and ranked the nodes based on their ability to push the network into unstable modes. These random walks represent the local structure of the information flow distribution and show how information from one node spreads to neighboring nodes.
Our goal is to understand whether hidden local structures of information flow can push the network into unstable modes. We hypothesized that the emergence of local polarized flabellate-shaped bifurcation in the information flow pathway causes symmetry breaking and identifies the initiator of instability within the network.
Each division of bifurcation can branch off in the form of nested projections accompanied by a polarity transition. These polarized structures of information flow with fractal-like geometry tend to propagate perturbation faster across the network.
Inspired by the formula for the electric dipole moment of a pair of charges, computed as the magnitude of the charges multiplied by the distance between them, a measure was introduced to represent the overall moment generated by a potential symmetric fractal dipole. The individual nodes of the graph are treated as charges with unit magnitude, and the edge weight represents the distance between two charges. In this study, this measure is called the normalized summation of transition cost (NSTC). Given the weights of the edges traversed in each two-step random walk starting from node k, the product of the edge weights along each path is computed. The products over all paths traversed from the starting node are then summed and normalized by dividing by N_k, where k is the index of the starting node and N_k is the total number of two-step paths traversed from node k. The normalized summation of transition cost, as a measure of the overall moment generated by the potential symmetric fractal dipole, is:
$$\mathrm{NSTC}_k = \frac{1}{N_k} \sum_{(k \to i \to j)} \omega_{ki}\, \omega_{ij}$$
where k is the starting node, i is the visited node in the first step, and j is the visited node in the second step.
The more negative $\mathrm{NSTC}_k$ is, the stronger the topological polarity transition. Nodes become more unstable under a stronger topological polarity transition, and unstable nodes have a higher potential to spread instability across the network. The spreading ability of nodes is therefore ranked by the negativity of $\mathrm{NSTC}_k$.
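The NSTC computation over two-step walks can be sketched as follows; the toy signed network and its weights are hypothetical and chosen only to illustrate the polarity effect.

```python
import numpy as np

def nstc(A, k):
    # Normalized summation of transition cost: average product of edge
    # weights over all two-step walks k -> i -> j along nonzero edges.
    n = A.shape[0]
    products = []
    for i in range(n):
        if A[k, i] == 0:
            continue
        for j in range(n):
            if A[i, j] == 0:
                continue
            products.append(A[k, i] * A[i, j])
    return sum(products) / len(products) if products else 0.0

# Toy signed network: node 0 fans out through node 1 into opposite polarities
A = np.array([[0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, -3.0, 1.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]])
score = nstc(A, 0)   # paths: 0->1->2 (2 * -3 = -6) and 0->1->3 (2 * 1 = 2)
```

A more negative score signals a stronger polarity transition around the starting node, which is how the nodes are ranked in this analysis.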

2.6. Theoretical Justification for Analysis Approaches

Various theories have been developed that provide mathematical and conceptual tools for comprehending complex systems across different domains. Drawing inspiration from these theories, we aim to assess the stability of our complex system from multiple perspectives, including spectral, topological stability, and symmetry-breaking viewpoints. For example, theoretical justification for polarity-driven structural instabilities within a network can be explained by the bipolar fuzzy set theory, which captures the bipolar nature of real-world systems and allows for more accurate representation [36,37]. According to the Equilibrium energy and stability measures for bipolar dynamics [38,39,40], networks can attain a stable equilibrium state by balancing opposing interactions, such as attraction and repulsion, positive and negative feedback, or excitation and inhibition. When this balance is disrupted due to a change in the strength, sign, or topology of the interactions between network components caused by external stimuli, the network may experience structural instability that results in a new equilibrium state or even a bifurcation to an entirely different regime.

3. Results

Our hypothesis was tested on the state-space model of the actuator, represented by Equation (1). First, the attention coefficients were extracted for all the nodes using an AGCN. Then, three different stability analyses were performed, and the nodes with a higher instability risk were identified in each analysis: (1) spectral stability analysis, (2) topological stability analysis, and (3) symmetry-breaking stability analysis.

3.1. Attention Mechanism

Considering the connections between the nodes in Figure 1, nodes 0, 3, 4, and 7 form one cluster (4-degree nodes), and nodes 1, 2, 5, and 6 form another cluster (2-degree nodes). Two different scenarios were tested. In the first scenario, an AGCN model was trained to classify these two clusters. In the second scenario, a perturbation was applied to the feature set of node 0, and an AGCN model was trained to classify the two clusters in the presence of this node feature perturbation. The feature set was perturbed by multiplying it by a factor of 2. Labels of 0.01 and 0.2 were assigned to the first and second clusters, respectively.
Figure 5 and Figure 6 show the training loss as a function of iteration number for the two scenarios, namely, without and with perturbation. In both scenarios, the training loss converged to approximately 0.0028 after 500 iterations, confirming the robustness of the AGCN model to feature perturbation.
Table 2 compares the model predictions against the truth labels for the above-mentioned scenarios. The model predictions summarized in Table 2 were not affected by the feature perturbation of node 0, indicating the robustness of the AGCN model with respect to feature perturbation.
Figure 7 shows that nodes #2 and #6 have the highest attention coefficients.

3.2. Spectral Stability Analysis

To test our hypothesis and check the effect of each node on the stability of the network, we looked at how a perturbation in one column of the adjacency matrix [9] is reflected in its largest eigenvalue. To verify that unstable nodes need more attention, we performed the spectral stability analysis and calculated the change in the largest eigenvalue of the adjacency matrix as the perturbation level increased. Figure 8 shows how different nodes in the graph responded to an increase in the perturbation level. As seen in Figure 8, nodes 2 and 6 are the ones that may move the system toward instability, because the largest eigenvalue gets closer to zero as the perturbation level on these nodes increases.

3.3. Topological Stability Analysis

To check to what extent unstable nodes are involved in imbalanced motifs within the network, a topological stability analysis was performed. The goal was to detect those nodes that lie within the path of imbalanced motifs of different orders. Figure 9 shows trajectories that start at node 2 and traverse three sample motifs of different orders.
In the network under study, all the imbalanced motifs of size 3 that pass through a specific node were first extracted. The product of the weights of the paths within each motif was then computed and stored as a single score. Similar scores were computed for the other motifs passing through the same node, and all these scores were summed to obtain the total score for each node. The total score of each node was normalized by the square of the node's degree. The same procedure was repeated for imbalanced motifs of sizes 4, 5, and 6. Table 3 summarizes the total score for each individual node and each motif order. The last column of Table 3 shows the total cost, obtained by multiplying these four scores and taking the cube root of the absolute value of the product. Figure 10 provides a visual representation of the total cost for each node and reflects the potential role of nodes 2 and 6 in moving the network into an unstable mode.

3.4. Symmetry-Breaking Stability Analysis

To confirm whether the unstable nodes contribute to some polarized structures within the network, a symmetry-breaking stability analysis was performed. To do this, the local structure of the information flow distribution was extracted for each node. The process of extracting these information flow distributions for two single nodes has been plotted in Figure 11 and Figure 12. Figure 11 shows all the paths that start at node 0 and traverse within a two-step random walk. A similar figure has been plotted for random walks starting from node 2 (Figure 12).
Simultaneously plotting all the random walks corresponding to each node (Figure 13) revealed a clear pattern of flabellate-shaped bifurcations at nodes 2 and 6.
As shown in Figure 14, nodes 2 and 6 are influential spreaders capable of pushing the network into an unstable mode.
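The per-node information-flow trees can be built by exhaustively enumerating two-step walks. The sketch below uses a hypothetical 4-node signed matrix (not the actuator network) to show the enumeration that underlies Figures 11-13:

```python
import numpy as np

def flow_tree(W, root):
    """All two-step walks root -> j -> k with their edge weights:
    the local information-flow distribution rooted at one node."""
    return [((root, j, k), (W[root, j], W[j, k]))
            for j in np.flatnonzero(W[root])
            for k in np.flatnonzero(W[j])]

# Toy signed graph; node 0 has a single branch 0 -> 1 -> 2.
W = np.array([[0, 1, 0, 0],
              [0, 0, -1, 0],
              [0, 0, 0, 2],
              [1, 0, 0, 0]], float)
tree = flow_tree(W, 0)  # one walk: (0, 1, 2) with weights (1.0, -1.0)
```

A sign change along a branch (here 1.0 followed by -1.0) is the kind of polarity transition the symmetry-breaking analysis looks for.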

4. Discussion and Conclusions

This study provided a proof of concept for the triangular relationship between the attention mechanism, instability, and structural dynamics in a network. We showed that the mechanism enabling a machine learning model to focus on relevant nodes can be explained from the perspective of structural dynamics and its inherent instability. Here, we studied this triangular relationship in a linear dynamical system, and the outcomes helped compensate for the lack of explainability in the attention mechanism. In future studies, we aim to extend our investigation to nonlinear and nonstationary dynamical systems.
The contributions of this study bring several interesting insights. First, this study provided evidence for the relationship between the attention mechanism, dynamics, and unstable nodes: the most relevant parts of the input data in graph neural networks are those with the ability to change the network dynamics. In this way, the study explains the attention mechanism through the lens of instability analysis. Second, it was found that the collective behavior of imbalanced motifs in the network is also determinative in changing network dynamics, providing evidence that such motifs deserve more attention. Third, we observed polarity-driven instabilities in hidden fractal patterns in the network, which shifted the analytic strategy toward paying more attention to hidden structures of polarity transition.
We showed that stability analysis offers a promising way to make the attention mechanism in a graph convolutional network faster and more efficient by reducing computational complexity, increasing interpretability, and eliminating sensitivity to hyperparameters. Ranking nodes by their stability properties makes attention models more transparent and explainable, and can be applied to a wide range of tasks, including weight pruning [41], sparsification and reducing the number of non-zero weights in the network [42], and introducing structural bias [43].
The intent of these contributions is to open doors to explainable tools that can speed up training in graph machine learning. Can the process of graph machine learning be made more adaptive by incorporating knowledge from stability analysis? Can prior knowledge be injected into a graph attention network through stability analysis, and how would this improve its accuracy? If stability analysis tells us, before any learning takes place, which nodes need attention, how much could this speed up the aggregation of information during node embedding? Could the attention mechanism be replaced with stability analysis altogether? Could hyperparameter tuning in mechanisms such as biased random walks be avoided by determining transition probabilities from nodes' spreading ability and stability properties? These are the questions we plan to address in upcoming work.
An important direction for future work is to apply bipolar fuzzy set theory as a theoretical framework for understanding polarity-driven structural instabilities within networks. By capturing the bipolar nature of real-world systems, this approach allows for a more accurate representation of complex network dynamics [36,37]. Furthermore, stability measures for bipolar dynamics provide a means to study how networks achieve a stable equilibrium state by balancing positive and negative interactions [38,39,40]. These measures can help us investigate how external stimuli that change the strength, sign, or topology of the interactions between network components can disrupt this balance, leading to structural instabilities. We believe that applying these theories can enhance our understanding of the mechanisms underlying the emergence of structural instabilities in networks.

Author Contributions

Conceptualization, N.B. and M.L.; methodology, N.B.; software, N.B.; validation, M.L.; formal analysis, N.B.; investigation, M.L.; resources, M.L.; writing—original draft preparation, N.B.; writing—review and editing, N.B. and M.L.; visualization, N.B.; supervision, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors thank (1) the J.P. Bickell Foundation (medical research) and (2) the Finnish Parkinson Foundation for their support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In the symmetry-breaking stability analysis, the following calculations illustrate how the spreading ability of each node is computed based on Equation (7). Figure A1 shows one example of the path weights traversed during a single random walk.
Figure A1. Path’s weights traversed from node 0 during a single random walk.
Figure A1. Path’s weights traversed from node 0 during a single random walk.
Dynamics 03 00013 g0a1
$$\mathrm{NSTC}_0 = \frac{\omega_{01}\omega_{12} + \omega_{01}\omega_{16} + \omega_{05}\omega_{52} + \omega_{05}\omega_{56}}{N_0} = \frac{(1)(1) + (1)(1) + (1)(1) + (1)(1)}{4} = 1$$

$$\mathrm{NSTC}_1 = \frac{\omega_{12}\omega_{23} + \omega_{12}\omega_{27} + \omega_{16}\omega_{63} + \omega_{16}\omega_{67}}{N_1} = \frac{(1)(1) + (1)(1) + (1)(1) + (1)(1)}{4} = 1$$

$$\mathrm{NSTC}_2 = \frac{\omega_{23}\omega_{30} + \omega_{23}\omega_{31} + \omega_{23}\omega_{36} + \omega_{23}\omega_{37} + \omega_{27}\omega_{73} + \omega_{27}\omega_{74} + \omega_{27}\omega_{75} + \omega_{27}\omega_{76}}{N_2}$$
$$= \frac{(1)(274{,}800{,}000) + (1)(130{,}830{,}000) + (1)(2633.1) + (1)(949.2) + (1)(191.8224) + (1)(523{,}460{,}000) + (1)(365{,}490{,}000) + (1)(17{,}819{,}000)}{8} = 1.3134 \times 10^{8}$$

$$\mathrm{NSTC}_3 = \frac{\omega_{30}\omega_{01} + \omega_{30}\omega_{05} + \omega_{31}\omega_{12} + \omega_{31}\omega_{16} + \omega_{32}\omega_{27} + \omega_{36}\omega_{67} + \omega_{37}\omega_{72} + \omega_{37}\omega_{74} + \omega_{37}\omega_{75} + \omega_{37}\omega_{76}}{N_3}$$
$$= \frac{(274{,}800{,}000)(1) + (274{,}800{,}000)(1) + (130{,}830{,}000)(1) + (130{,}830{,}000)(1) + (4{,}261{,}400)(1) + (2633.1)(1) + (949.2)(49.4899) + (949.2)(523{,}460{,}000) + (949.2)(365{,}490{,}000) + (949.2)(17{,}819{,}000)}{10} = 8.6041 \times 10^{10}$$

$$\mathrm{NSTC}_4 = \frac{\omega_{41}\omega_{12} + \omega_{41}\omega_{16} + \omega_{45}\omega_{52} + \omega_{45}\omega_{56}}{N_4} = \frac{(1)(1) + (1)(1) + (1)(1) + (1)(1)}{4} = 1$$

$$\mathrm{NSTC}_5 = \frac{\omega_{52}\omega_{23} + \omega_{52}\omega_{27} + \omega_{56}\omega_{63} + \omega_{56}\omega_{67}}{N_5} = \frac{(1)(1) + (1)(1) + (1)(1) + (1)(1)}{4} = 1$$

$$\mathrm{NSTC}_6 = \frac{\omega_{63}\omega_{30} + \omega_{63}\omega_{31} + \omega_{63}\omega_{32} + \omega_{63}\omega_{37} + \omega_{67}\omega_{72} + \omega_{67}\omega_{73} + \omega_{67}\omega_{74} + \omega_{67}\omega_{75}}{N_6}$$
$$= \frac{(1)(274{,}800{,}000) + (1)(130{,}830{,}000) + (1)(4{,}261{,}400) + (1)(949.2) + (1)(49.4899) + (1)(191.8224) + (1)(523{,}460{,}000) + (1)(365{,}490{,}000)}{8} = 1.0372 \times 10^{8}$$

$$\mathrm{NSTC}_7 = \frac{\omega_{72}\omega_{23} + \omega_{73}\omega_{30} + \omega_{73}\omega_{31} + \omega_{73}\omega_{32} + \omega_{73}\omega_{36} + \omega_{74}\omega_{41} + \omega_{74}\omega_{45} + \omega_{75}\omega_{52} + \omega_{75}\omega_{56} + \omega_{76}\omega_{63}}{N_7}$$
$$= \frac{(49.4899)(1) + (191.8224)(274{,}800{,}000) + (191.8224)(130{,}830{,}000) + (191.8224)(4{,}261{,}400) + (191.8224)(2633.1) + (523{,}460{,}000)(1) + (523{,}460{,}000)(1) + (365{,}490{,}000)(1) + (365{,}490{,}000)(1) + (17{,}819{,}000)(1)}{10} = 2.6639 \times 10^{9}$$
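These per-node averages can be reproduced compactly. The sketch below is an illustrative reading of Equation (7) as expanded above: NSTC of node i is the mean of the weight products over all two-step walks i -> j -> k, so with all unit weights it returns 1, matching NSTC_0, NSTC_1, NSTC_4, and NSTC_5. The graph used is a toy unit-weight cycle, not the actuator network.

```python
import numpy as np

def nstc(W, i):
    """Spreading ability of node i: mean edge-weight product over all
    two-step walks i -> j -> k (illustrative form of Equation (7))."""
    prods = [W[i, j] * W[j, k]
             for j in np.flatnonzero(W[i])
             for k in np.flatnonzero(W[j])]
    return sum(prods) / len(prods) if prods else 0.0

# Unit-weight directed 4-cycle: every two-step product is 1, so NSTC = 1.
W = np.zeros((4, 4))
for a in range(4):
    W[a, (a + 1) % 4] = 1.0
```

Negative edge weights propagate into negative path products, which is how the negative NSTC scores of Figure 14 arise.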

References

1. Gu, S.; Pasqualetti, F.; Cieslak, M.; Telesford, Q.K.; Yu, A.B.; Kahn, A.E.; Medaglia, J.D.; Vettel, J.M.; Miller, M.B.; Grafton, S.T.; et al. Controllability of structural brain networks. Nat. Commun. 2015, 6, 8414.
2. Chen, C.; Zhao, X.; Wang, J.; Li, D.; Guan, Y.; Hong, J. Dynamic graph convolutional network for assembly behavior recognition based on attention mechanism and multi-scale feature fusion. Sci. Rep. 2022, 12, 7394.
3. Zhou, P.; Cao, Y.; Li, M.; Ma, Y.; Chen, C.; Gan, X.; Wu, J.; Lv, X. HCCANet: Histopathological image grading of colorectal cancer using CNN based on multichannel fusion attention mechanism. Sci. Rep. 2022, 12, 15103.
4. Knyazev, B.; Taylor, G.W.; Amer, M. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS); arXiv 2019, arXiv:1905.02850.
5. Pirani, M.; Costa, T.; Sundaram, S. Stability of dynamical systems on a graph. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 613–618.
6. Meeks, L.; Rosenberg, D.E. High Influence: Identifying and Ranking Stability, Topological Significance, and Redundancies in Water Resource Networks. J. Water Resour. Plan. Manag. 2017, 143, 04017012.
7. Gama, F.; Bruna, J.; Ribeiro, A. Stability Properties of Graph Neural Networks. IEEE Trans. Signal Process. 2020, 68, 5680–5695.
8. Yang, F.; Cao, Y.; Xue, Q.; Jin, S.; Li, X.; Zhang, W. Contrastive Embedding Distribution Refinement and Entropy-Aware Attention for 3D Point Cloud Classification. arXiv 2022, arXiv:2201.11388.
9. Li, A.; Huynh, C.; Fitzgerald, Z.; Cajigas, I.; Brusko, D.; Jagid, J.; Claudio, A.O.; Kanner, A.M.; Hopp, J.; Chen, S.; et al. Neural fragility as an EEG marker of the seizure onset zone. Nat. Neurosci. 2021, 24, 1465–1474; Erratum in Nat. Neurosci. 2022, 25, 530.
10. Zhang, Y.; Zhang, Z.; Wei, D.; Deng, Y. Centrality Measure in Weighted Networks Based on an Amoeboid Algorithm. J. Inf. Comput. Sci. 2012, 9, 369–376.
11. Piraveenan, M.; Prokopenko, M.; Hossain, L. Percolation centrality: Quantifying graph-theoretic impact of nodes during percolation in networks. PLoS ONE 2013, 8, e53095.
12. Avena-Koenigsberger, A.; Mišić, B.; Hawkins, R.X.D.; Griffa, A.; Hagmann, P.; Goñi, J.; Sporns, O. Path ensembles and a tradeoff between communication efficiency and resilience in the human connectome. Brain Struct. Funct. 2017, 222, 603–618.
13. Kwon, H.; Choi, Y.H.; Lee, J.M. A Physarum Centrality Measure of the Human Brain Network. Sci. Rep. 2019, 9, 5907.
14. Kitsak, M.; Gallos, L.; Havlin, S.; Liljeros, F.; Muchnik, L.; Stanley, H.E.; Makse, H.A. Identification of influential spreaders in complex networks. Nat. Phys. 2010, 6, 888–893.
15. Sun, Y.; Ma, L.; Zeng, A.; Wang, W.-X. Spreading to localized targets in complex networks. Sci. Rep. 2016, 6, 38865.
16. Zhang, C.; Zhou, S.; Miller, J.; Cox, I.J.; Chain, B.M. Optimizing Hybrid Spreading in Metapopulations. Sci. Rep. 2015, 5, 9924.
17. Davis, J.T.; Chinazzi, M.; Perra, N.; Mu, K.; Piontti, A.P.Y.; Ajelli, M.; Dean, N.E.; Gioannini, C.; Litvinova, M.; Merler, S.; et al. Cryptic transmission of SARS-CoV-2 and the first COVID-19 wave. Nature 2021, 600, 127–132.
18. Le Treut, G.; Huber, G.; Kamb, M.; Kawagoe, K.; McGeever, A.; Miller, J.; Pnini, R.; Veytsman, B.; Yllanes, D. A high-resolution flux-matrix model describes the spread of diseases in a spatial network and the effect of mitigation strategies. Sci. Rep. 2022, 12, 15946.
19. Wang, W.; Tang, M.; Yang, H.; Do, Y.; Lai, Y.-C.; Lee, G. Asymmetrically interacting spreading dynamics on complex layered networks. Sci. Rep. 2014, 4, 5097.
20. Salnikov, V.; Schaub, M.; Lambiotte, R. Using higher-order Markov models to reveal flow-based communities in networks. Sci. Rep. 2016, 6, 23194.
21. Pascual, M. Diffusion-induced chaos in a spatial predator–prey system. Proc. R. Soc. Lond. Ser. B Biol. Sci. 1993, 251, 1–7.
22. Petrovskii, S.; Li, B.-L.; Malchow, H. Quantification of the Spatial Aspect of Chaotic Dynamics in Biological and Chemical Systems. Bull. Math. Biol. 2003, 65, 425–446.
23. Nicolaou, Z.G.; Case, D.J.; Wee, E.B.v.d.; Driscoll, M.M.; Motter, A.E. Heterogeneity-stabilized homogeneous states in driven media. Nat. Commun. 2021, 12, 4486.
24. Hammouche, M.; Lutz, P.; Rakotondrabe, M. Robust and Optimal Output-Feedback Control for Interval State-Space Model: Application to a Two-Degrees-of-Freedom Piezoelectric Tube Actuator. J. Dyn. Syst. Meas. Control 2018, 141, 021008.
25. Bloem, P. Transformers from Scratch; VU University: Amsterdam, The Netherlands, 2019.
26. Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph Attention Networks. arXiv 2018, arXiv:1710.10903.
27. Chen, B.-S.; Kung, J.-Y. Robust stability of a structured perturbation system in state space models. In Proceedings of the 27th IEEE Conference on Decision and Control, Austin, TX, USA, 7–9 December 1988; Volume 1, pp. 121–122.
28. Saberi, M.; Khosrowabadi, R.; Khatibi, A.; Misic, B.; Jafari, G. Topological impact of negative links on the stability of resting-state brain network. Sci. Rep. 2021, 11, 2176.
29. Golubitsky, M.; Stewart, I. Symmetry and Bifurcation in Biology; Banff International Research Station (BIRS): Banff, AB, Canada, 2003.
30. Ruzzenenti, F.; Garlaschelli, D.; Basosi, R. Complex Networks and Symmetry II: Reciprocity and Evolution of World Trade. Symmetry 2010, 2, 1710–1744.
31. Goirand, F.; Le Borgne, T.; Lorthois, S. Network-driven anomalous transport is a fundamental component of brain microvascular dysfunction. Nat. Commun. 2021, 12, 7295.
32. Broussard, M.A. Diagram of lamellate antenna, 27 March 2016, based on File: Ten-lined June beetle Close-up.jpg. Available online: https://commons.wikimedia.org/wiki/File:Insect-antenna_lamellate.svg (accessed on 28 January 2023).
33. Sanchez-Rodriguez, L.M.; Iturria-Medina, Y.; Mouches, P.; Sotero, R.C. Detecting brain network communities: Considering the role of information flow and its different temporal scales. NeuroImage 2021, 225, 117431.
34. Fallani, F.d.V.; Costa, L.d.F.; Rodriguez, F.A.; Astolfi, L.; Vecchiato, G.; Toppi, J.; Borghini, G.; Cincotti, F.; Mattia, D.; Salinari, S.; et al. A graph-theoretical approach in brain functional networks. Possible implications in EEG studies. Nonlinear Biomed. Phys. 2010, 4 (Suppl. 1), S8.
35. Rosvall, M.; Esquivel, A.; Lancichinetti, A.; West, J.D.; Lambiotte, R. Memory in network flows and its effects on spreading dynamics and community detection. Nat. Commun. 2014, 5, 4630.
36. Zhang, W.-R. Bipolar fuzzy sets and relations: A computational framework for cognitive modeling and multiagent decision analysis. In Proceedings of the 1st International Joint Conference of the North American Fuzzy Information Processing Society Biannual Conference, San Antonio, TX, USA, 18–21 December 1994; pp. 305–309.
37. Zhang, W.-R. NPN fuzzy sets and NPN qualitative algebra: A computational framework for bipolar cognitive modeling and multiagent decision analysis. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 561–575.
38. Zhang, W.-R. Equilibrium energy and stability measures for bipolar decision and global regulation. Int. J. Fuzzy Syst. 2003, 5, 114–122.
39. Zhang, W.-R. Equilibrium relations and bipolar cognitive mapping for online analytical processing with applications in international relations and strategic decision support. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2003, 33, 295–307.
40. Zhang, W.-R. Ground-0 Axioms vs. First Principles and Second Law: From the Geometry of Light and Logic of Photon to Mind-Light-Matter Unity-AI&QI. IEEE/CAA J. Autom. Sin. 2021, 8, 534–553.
41. Wang, J.; Gao, R.; Zheng, H.; Zhu, H.; Shi, C. SSGCNet: A Sparse Spectra Graph Convolutional Network for Epileptic EEG Signal Classification. arXiv 2022, arXiv:2203.12910.
42. Palcu, L.-D.; Supuran, M.; Lemnaru, C.; Dinsoreanu, M.; Potolea, R.; Muresan, R.C. Breaking the interpretability barrier—A method for interpreting deep graph convolutional models. In Proceedings of the International Workshop NFMCP in Conjunction with ECML-PKDD 2019, Wurzburg, Germany, 16 September 2019.
43. Patil, A.G.; Li, M.; Fisher, M.; Savva, M.; Zhang, H. LayoutGMN: Neural Graph Matching for Structural Layout Similarity. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 11043–11052.
Figure 1. Graph representation of dynamical system in (1) by considering A as the adjacency matrix. Matrix A represents dynamics of hidden states in the piezoelectric tube actuator model, and each node corresponds to one state. In this study, the output vector, y, as well as variables C and D, have no impact on the graph shown in this figure. (Note: two decimal numbers and the self-loops were removed from the graph to make it visually easier to explore).
Figure 2. Self-attention operator for four sample nodes (figure adapted from [25]). The outputs y1, …, yi are aggregates of interactions between the inputs x1, …, xi and their attention scores ω11, ω12, …, ωij.
Figure 3. Examples of imbalanced motifs with different orders. According to the structural balance theory, loss of balance can occur when the multiplication of the signs of one cycle becomes negative.
Figure 4. Fractal dipole. In each bifurcation point, multiple paths are branched off, and polarity is reversed.
Figure 5. Training loss for the scenario without perturbation.
Figure 6. Training loss for the scenario with perturbation.
Figure 7. Comparing attention coefficients of each node for the scenario with and without perturbation on node 0. Nodes 2 and 6 are the ones that need more attention.
Figure 8. The effect of structural perturbation on the largest eigenvalue of the adjacency matrix. The perturbation level was initially set to 0.5 and gradually increased in steps of 3. Only for nodes 2 and 6 did the largest eigenvalue drive toward zero as the perturbation level increased.
Figure 9. N-node motif within the network under study.
Figure 10. The contribution of each node in forming imbalanced motifs with different degrees. This contribution shows the overall influence of each node on moving the network toward an unstable state.
Figure 11. Visualization of a random walk starting from node 0. These two-step random walks are all the possible paths initiated from node 0.
Figure 12. Visualization of a random walk starting from node 2.
Figure 13. Information flow tree rooted at each node. Flabellate-shaped bifurcations are observed at nodes 2 and 6. To determine whether these bifurcations form a fractal dipole, the polarity transition should be checked.
Figure 14. NSTC as the measure of spreading ability of each node. The negative score of the NSTC corresponds to the node where topological polarity transition occurs. Those nodes with more negative values of the NSTC have a higher ability to spread the instability across the network. A detailed explanation of the computation process for determining the spreading ability of nodes can be found in Appendix A.
Table 1. The random feature set assigned to each node as attributes.
| Node Id | Feature Set |
|---|---|
| 0 | 0.5, −0.1, 0.3 |
| 1 | 0.2, 0.1, 0.7 |
| 2 | −0.5, 0.7, −0.1 |
| 3 | −0.1, −0.6, 0.4 |
| 4 | 0.3, −0.5, −0.2 |
| 5 | 0.1, −0.1, −0.4 |
| 6 | 0.3, 0.8, −0.1 |
| 7 | 0.1, −0.2, 0.2 |
Table 2. Model prediction of node labels.
| Node Id | Actual Label | Predicted Output |
|---|---|---|
| 0 | 0.01 | 0.0830 |
| 1 | 0.20 | 0.1891 |
| 2 | 0.20 | 0.1632 |
| 3 | 0.01 | 0.0823 |
| 4 | 0.01 | 0.0834 |
| 5 | 0.20 | 0.1495 |
| 6 | 0.20 | 0.1646 |
| 7 | 0.01 | 0.0837 |
Table 3. Score associated with imbalanced motif paths traversed from each node.
| Node | Three-Node Motif | Four-Node Motif | Five-Node Motif | Six-Node Motif | Total Cost |
|---|---|---|---|---|---|
| 0 | 0.00 | −1.22 × 10^8 | 0.00 | 0.00 | 0.00 |
| 1 | 0.00 | −2.07 × 10^9 | −9.32 × 10^13 | −3.82 × 10^15 | 0.00 |
| 2 | −10,152,500 | −7.41 × 10^8 | −6.47 × 10^13 | −2.65 × 10^15 | 1.089 × 10^15 |
| 3 | 0.00 | −5.12 × 10^8 | −2.33 × 10^13 | −9.56 × 10^14 | 0.00 |
| 4 | 0.00 | −2.32 × 10^8 | 0.00 | 0.00 | 0.00 |
| 5 | −29,239,200 | −6.38 × 10^7 | 0.00 | −3.82 × 10^15 | 0.00 |
| 6 | −10,152,500 | −7.41 × 10^8 | −6.47 × 10^13 | −2.65 × 10^15 | 1.089 × 10^15 |
| 7 | −7,309,800 | −5.22 × 10^8 | −2.33 × 10^13 | −9.56 × 10^14 | 4.400 × 10^14 |

Share and Cite

Bahador, N.; Lankarany, M. Uncovering the Origins of Instability in Dynamical Systems: How Can the Attention Mechanism Help? Dynamics 2023, 3, 214-233. https://doi.org/10.3390/dynamics3020013
