Article

Optimal Interplay between Synaptic Strengths and Network Structure Enhances Activity Fluctuations and Information Propagation in Hierarchical Modular Networks

Department of Physics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, CEP 14040-901 Ribeirão Preto, SP, Brazil
*
Author to whom correspondence should be addressed.
Brain Sci. 2020, 10(4), 228; https://doi.org/10.3390/brainsci10040228
Submission received: 20 February 2020 / Revised: 3 April 2020 / Accepted: 4 April 2020 / Published: 10 April 2020
(This article belongs to the Special Issue Human Brain Dynamics: Latest Advances and Prospects)

Abstract
In network models of spiking neurons, the joint impact of network structure and synaptic parameters on activity propagation is still an open problem. Here, we use an information-theoretical approach to investigate activity propagation in spiking networks with a hierarchical modular topology. We observe that optimized pairwise information propagation emerges due to the increase of either (i) the global synaptic strength parameter or (ii) the number of modules in the network, while the network size remains constant. At the population level, information propagation of activity among adjacent modules is enhanced as the number of modules increases until a maximum value is reached and then decreases, showing that there is an optimal interplay between synaptic strength and modularity for population information flow. This is in contrast to information propagation evaluated among pairs of neurons, which attains its maximum at the upper ends of these two parameter ranges. By examining the network behavior under the increase of synaptic strength and the number of modules, we find that these increases are associated with two different effects: (i) the increase of autocorrelations among individual neurons and (ii) the increase of cross-correlations among pairs of neurons. The second effect is associated with better information propagation in the network. Our results suggest mechanisms that link topological features and synaptic strength levels to the transmission of information in cortical networks.


1. Introduction

Neurons in the cerebral cortex are interconnected according to selective, i.e., non-random, patterns of connectivity. Different experimental procedures are advancing our knowledge of these intricate connectivity patterns (see, e.g., [1,2,3,4,5,6,7,8]). With the help of computational models, the improved connectivity maps are allowing the realization of the long-standing goal of understanding the interplay between structure and dynamics in cortical networks [9,10,11]. Yet, it is an open question whether the evolutionary process that generated such a complex cortical wiring is the result of a selection mechanism for optimized region-to-region communication or some higher order function [12,13,14].
Connectivity may follow different classification schemes beyond physical (structural) connectivity per se. Functional and effective connectivity, which respectively relate to statistical dependencies among neural activity in different brain regions and the causal influence of one brain region over another, are widely used, but captured by different procedures [15,16]. Independently of the connectivity scheme used, experimental studies generally agree that cortical networks have a hierarchical modular architecture [17,18,19,20,21,22]. Previous works have shown that this type of architecture allows long-lived self-sustained activity states in spiking network models with characteristics akin to cortical spontaneous activity patterns [23,24,25]. However, these studies have not addressed the effect of the hierarchical modular architecture on information flow in the network.
Other studies based on network models with non-hierarchical modular architectures have investigated the information processing capability of the network by varying other features. Examples are the strength of the global synaptic coupling parameter in random networks with sparse connectivity [26]; the degree of synchronization among pools of excitatory and inhibitory neurons connected by feedback loops [27]; and, in the context of reservoir computing [28], the community structure within the reservoir [29] and the presence of topographically structured feed-forward connections within the reservoir [30].
The question of how topology is connected to information transmission is appealing especially in light of recent anatomical developments [31], which showed that pathways of information flow in the Drosophila connectome can be predicted from the network structure, and of more theoretically oriented studies [29], in which it was shown that an intermediate level of modularity in artificial recurrent neural networks is optimal for memory performance. Indeed, there is a general agreement that architecture shapes communication [30].
In this work, we tackle the problem of information transmission in hierarchical modular networks of spiking neurons. We study networks with different levels of hierarchical organization, which determine the number of modules, and with different overall strengths of synaptic coupling. Using information-theoretical measures, we show that information transmission in these networks has different dependencies on the level of hierarchy and the synaptic coupling strength. By analyzing information transmission between neurons and between modules, we show that the latter is not straightforwardly predictable from the former, disclosing the complexity behind communication dynamics in hierarchical modular networks. In particular, we find that there is an intermediate range of the number of modules (neither too few nor too many) for which information transmission between modules is maximal. This "optimality" phenomenon is not observed for information transmission between neurons. Our results underscore the importance of the hierarchical modular architecture of the cortex and suggest an interplay between network structure and synaptic strength with consequences for cortical information transmission.

2. Methods

2.1. Neuron Model

We used the leaky integrate-and-fire (LIF) neuron model [32]:
$\tau_m \dot{v}_j = -v_j + R\,(I_{j,\mathrm{loc}} + I_{j,\mathrm{ext}})$,  (1)
where v_j is the membrane potential of neuron j, R is the membrane resistance, and τ_m is the membrane time constant. The synaptic currents arriving at neuron j are represented by I_{j,loc}, the "local" (recurrent) input, and I_{j,ext}, the external input received by neuron j. The model obeys a fire-and-reset rule: when the voltage reaches the threshold v_th, a spike is emitted and the voltage is reset to the reset potential v_r. After a spike, the neuron is unable to respond for a refractory period of duration τ_ref.
Upon arrival of an excitatory input to neuron j, R I_{j,loc} is incremented by J (in mV), and upon arrival of an inhibitory input, it is decremented by gJ, where g is the relative inhibitory synaptic strength parameter. Synaptic communication has a delay τ_D, which is the same for all neuron pairs. The single-neuron and network parameters are shown in Table 1.
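To make the model concrete, here is a minimal Brian 2 sketch of this neuron and synapse scheme for a small random network. The numerical values are placeholders standing in for those of Table 1, and the network size is reduced; this illustrates the model equations and is not the simulation code used in the paper.

```python
from brian2 import ms, mV, NeuronGroup, Synapses, run

# Placeholder parameter values (see Table 1 for the ones actually used)
tau_m  = 10*ms     # membrane time constant
v_th   = 20*mV     # firing threshold
v_r    = 10*mV     # reset potential
tau_rp = 2*ms      # refractory period
tau_D  = 1.5*ms    # synaptic delay (same for all pairs)
J      = 0.2*mV    # excitatory synaptic strength
g      = 5.0       # relative inhibitory strength
RI_ext = 30*mV     # constant external drive (Section 2.3)

eqs = 'dv/dt = (-v + RI_ext)/tau_m : volt (unless refractory)'
neurons = NeuronGroup(1000, eqs, threshold='v > v_th', reset='v = v_r',
                      refractory=tau_rp, method='exact')
exc, inh = neurons[:800], neurons[800:]   # 4:1 excitatory:inhibitory ratio

syn_e = Synapses(exc, neurons, on_pre='v += J', delay=tau_D)    # +J jump
syn_i = Synapses(inh, neurons, on_pre='v -= g*J', delay=tau_D)  # -gJ jump
syn_e.connect(p=0.01)   # connection probability epsilon = 0.01
syn_i.connect(p=0.01)

run(2000*ms)
```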

2.2. Network

The hierarchical modular networks used here were constructed as described below [23,24,25]. The construction algorithm resulted in networks with a hierarchical modular structure akin to those observed in cortical networks [17,21,33,34]. We started with a random network of N = 2^17 = 131,072 neurons connected with connectivity ϵ = 0.01. The parameter ϵ is the probability of a synaptic connection between any pair of neurons in the network. The ratio of excitatory to inhibitory neurons is 4:1, which is based on experimental evidence that approximately 20% of cortical neurons are inhibitory [35,36,37,38]. This network has only one module and will be called a network of hierarchical level H = 0. Networks of higher hierarchical levels are generated by the following algorithm:
  1. Randomly divide each module of the network into two modules of equal size;
  2. With probability R_ex (for excitatory connections) or R_in (for inhibitory connections), replace each intermodular connection i → j by a new connection i → k, where k is a randomly chosen neuron from the same module as i;
  3. Recursively apply Steps 1 and 2 to build networks of higher (H = 2, 3, …) hierarchical levels. A network with hierarchical level H has 2^H modules.
The rewiring probabilities have values R_ex = 0.9 and R_in = 1, so that the intermodular connections are exclusively excitatory.
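The sketch below illustrates Steps 1 and 2 in Python, under the simplifying assumption (ours, not necessarily the authors') that neurons are ordered so that each module occupies a contiguous index block; the function name and the reduced network size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def raise_hierarchy(edges, N, H, R):
    """Go from level H-1 to level H (Steps 1-2). After every module is
    split in two, each connection (i, j) that crosses the new module
    border is replaced, with probability R, by (i, k) with k drawn from
    the module of i. `edges` is an (E, 2) array of directed pairs."""
    size = N // 2**H                      # module size at level H
    src_mod = edges[:, 0] // size
    tgt_mod = edges[:, 1] // size
    rewire = (src_mod != tgt_mod) & (rng.random(len(edges)) < R)
    edges[rewire, 1] = src_mod[rewire] * size + rng.integers(0, size,
                                                             rewire.sum())
    return edges

# Example: build an H = 3 network from an H = 0 random graph
# (self-loops/duplicate edges are ignored here for brevity; R would be
# 0.9 for excitatory and 1.0 for inhibitory source neurons).
N, eps = 2**12, 0.01                      # reduced size for illustration
edges = rng.integers(0, N, size=(int(eps * N * N), 2))
for H in range(1, 4):
    edges = raise_hierarchy(edges, N, H, R=0.9)
```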
Some examples of hierarchical modular networks are shown in Figure 1. They allow a visualization of the hierarchical structure of the network: as H increases, the number of modules increases, and modules are encapsulated in groups of modules. Connections between modules that are “topologically” closer are denser than between more topologically distant ones. Inhibitory connections occur strictly within modules (are “local”), while excitatory connections can be both local and long-range. For purposes that will be described below, we introduced an arbitrary ordering scheme for the modules (see the bottom of Figure 1).

2.3. Simulation Protocol

We study hierarchical modular networks with hierarchical level H in the range [0, 9], where H = 0 corresponds to a network with an Erdős–Rényi topology (see above). For each H level, the network is subjected to the same stimulation protocol, aimed at simulating spontaneous activity in the network. The stimulation protocol consists of applying a constant external input R I_ext = 30 mV to all neurons of the network for the simulation time T = 2 s.
For each H level, the above stimulation protocol was repeated for coupling strengths J in the range [0.1, 1] with increments of 0.05. The value of g was fixed at five for all simulations. The network activity in each simulation was characterized by the statistical measures described below.

2.4. Statistics

The spike-train of neuron j is given by the sum of delta functions:
$x_j(t) = \sum_i \delta(t - t_i^f)$,  (2)
where t_i^f is the time of the i-th spike of neuron j. From the spike-train, one can obtain the firing rate of neuron j over a time interval T as $\nu_j = \langle x_j(t) \rangle = n_j / T = \int_0^T x_j(t)\, dt / T$.
The network time-dependent firing rate (activity) of a population of N neurons is defined as:
$r(t; \Delta t) = \frac{1}{N\,\Delta t} \sum_{j=1}^{N} \int_t^{t+\Delta t} x_j(t')\, dt'$,  (3)
where the time window is fixed at Δt = 1 ms. For simplicity, below we will denote this time-dependent firing rate by r(t). The average of r(t) over a time interval T will be indicated here by ν.
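As a concrete example, Equation (3) can be implemented with a simple histogram over all spike times (a minimal sketch with our own naming; times and bins in seconds):

```python
import numpy as np

def population_rate(spike_times, N, T, dt=1e-3):
    """Time-dependent population rate r(t; dt) of Equation (3):
    spike count per bin divided by N*dt, in Hz per neuron."""
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, T + dt, dt))
    return counts / (N * dt)
```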
The power spectrum of x j ( t ) is defined as:
$S_{xx,j}(f) = \frac{\langle \tilde{x}_j(f)\, \tilde{x}_j^{*}(f) \rangle}{T}$,  (4)
where T is the simulation time, x̃_j(f) is the Fourier transform of the j-th spike-train, $\tilde{x}_j(f) = \int_0^T dt\, e^{2\pi i f t}\, x_j(t)$, and x̃_j*(f) is its complex conjugate.
In general, we considered the averaged spike-train power spectrum over a number K of neurons:
$\bar{S}_{xx}(f) = \frac{1}{K} \sum_{j=1}^{K} S_{xx,j}(f)$.  (5)
To evaluate the spike-train's long-term variability, we used the Fano factor (FF),
$FF = \frac{\langle \Delta n^2 \rangle}{\langle n \rangle}$,  (6)
where n is the spike count over a given time window T, $n = \int_0^T x(t)\, dt$. A large value of FF indicates an enhancement of slow fluctuations. In our simulations, we extracted FF from S̄_xx(f), since the two are related by lim_{f→0} S̄_xx(f) = ν × FF. From S̄_xx(f), we also extracted the mean firing rate of the network through the relationship lim_{f→∞} S̄_xx(f) = ν (cf. [39,40]).
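For illustration, the sketch below estimates the single-train periodogram of Equation (4); averaging it over neurons yields S̄_xx(f), from which FF and ν follow via the two limits above. Bin size and function name are our choices, not the authors' code.

```python
import numpy as np

def spiketrain_spectrum(spike_times, T, dt=1e-4):
    """Periodogram S_xx(f) = |x~(f)|^2 / T (Equation (4)) of one spike
    train binned at resolution dt (all times in seconds)."""
    nbins = int(round(T / dt))
    x = np.zeros(nbins)
    idx = (np.asarray(spike_times) / dt).astype(int)
    np.add.at(x, idx[idx < nbins], 1.0 / dt)  # delta spikes become 1/dt
    xf = np.fft.rfft(x) * dt                  # Fourier transform of x(t)
    f = np.fft.rfftfreq(nbins, dt)
    return f, np.abs(xf)**2 / T

# Averaging over K neurons gives S_xx_bar; then FF ~ S_xx_bar(f -> 0)/nu
# and nu ~ S_xx_bar(f -> infinity).
```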
For spike-trains, we computed the autocorrelation function:
$c_{xx}(\tau) = \frac{1}{K} \sum_{j=1}^{K} \big[ \langle x_j(t)\, x_j(t+\tau) \rangle - \langle x_j(t) \rangle \langle x_j(t+\tau) \rangle \big]$,  (7)
which in our work was always an average over K = 10,000 randomly chosen neurons, normalized by c_xx(0). Similarly, the cross-correlation function c_xy(τ) is computed by taking K = 10,000 randomly chosen pairs of spike-trains x(t) and y(t).
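A direct estimator of Equation (7) for a single binned spike train could look as follows (illustrative; in the paper this is averaged over the 10,000 sampled neurons, and the cross-correlation is obtained analogously from two different trains):

```python
import numpy as np

def spike_autocorr(x, max_lag):
    """Autocovariance c_xx(tau) of one binned spike train
    (Equation (7)), normalized by its zero-lag value."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    c = np.array([np.mean(x[:len(x) - k] * x[k:]) - mu * mu
                  for k in range(max_lag + 1)])
    return c / c[0]
```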
Following [40,41,42], we also extracted the correlation time τ_c from S̄_xx(f) by applying Parseval's theorem to the integral of the squared, normalized correlation function:
$\tau_c = \int_{-\infty}^{+\infty} d\tau \left[ \frac{\hat{c}(\tau)}{\hat{c}(0)} \right]^2 = \int_{-\infty}^{+\infty} df\, \frac{\left( \bar{S}_{xx}(f) - \nu \right)^2}{\nu^4}$,  (8)
where ĉ(τ) denotes the continuous part of the spike-train's correlation function,
$\hat{c}(\tau) = \underbrace{\langle x(t)\, x(t+\tau) \rangle - \langle x(t) \rangle \langle x(t+\tau) \rangle}_{\text{correlation function } c(\tau)} - \nu\, \delta(\tau)$.  (9)
To measure information flow in the network, we made use of the transfer entropy (TE) [43]. This quantity measures how much the predictability of the spike-train x(t) of a given neuron is improved by knowledge of the spike-train y(t) of a different neuron [44] (for simplicity, we denote the spike-trains at a given time t by x_t and y_t). Given that the measure is asymmetric, it also conveys a directional sense, i.e., whether information is flowing from x to y or vice versa.
Here, we used a version of TE called delayed transfer entropy [45], which is given by:
$TE_{y \to x}(d) = \sum p(x_{t+1+d},\, x_{t+d},\, y_t) \log_2 \frac{p(x_{t+1+d},\, x_{t+d},\, y_t)\; p(x_{t+d})}{p(x_{t+1+d},\, x_{t+d})\; p(x_{t+d},\, y_t)}$.  (10)
Equation (10) refers to the situation when a presynaptic neuron y sends signals to a postsynaptic neuron x. In this case, TE_{y→x}(d) is obtained by taking four spike-trains: y_t, x_t, the spike-train of the receiving neuron shifted by a delay d (x_{t+d}), and the spike-train of the receiving neuron shifted by a delay d + 1 (x_{t+1+d}). From these spike-trains, we determined the marginal probability p(x_{t+d}) and the joint probabilities p(x_{t+1+d}, x_{t+d}), p(x_{t+d}, y_t), and p(x_{t+1+d}, x_{t+d}, y_t), which were used to calculate TE_{y→x}(d). In Equation (10), the summation is taken over the set of all possible combinations of symbols for the spike-trains.
Since the value of the spike-train in each time bin is either 0 (for silence) or 1 (for a spike), the two-variable joint probabilities, such as p(x_{t+d}, y_t), have 2^2 = 4 possible symbol combinations, and the three-variable joint probability p(x_{t+1+d}, x_{t+d}, y_t) has 2^3 = 8. In Figure 2, we summarize the procedure to measure TE_{y→x} explained above. To illustrate that TE is maximized when the delay equals the time delay of the connection between two neurons, and that the measure is asymmetric (TE_{y→x} ≠ TE_{x→y}), in Figure 2c we plot TE_{y→x} and TE_{x→y} for a simple network of two coupled neurons. The system was artificially set up so that x fires three time steps after y and y fires two time steps after x (Figure 2a); accordingly, TE_{y→x} is maximum for d = 3, whereas TE_{x→y} is maximum for d = 2. The delay for which TE is maximum can be interpreted not only as the time that information takes to go from y to x, but also as the time delay of a possible functional connection between the pair of neurons [46]. In fact, many studies use this approach to determine and retrieve the connectivity map of a network [47].
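For binary spike trains, the probabilities in Equation (10) can be estimated by simply counting symbol triplets, as in the following minimal sketch (our own implementation, not the authors' released code):

```python
import numpy as np

def delayed_te(x, y, d):
    """Delayed transfer entropy TE_{y->x}(d) of Equation (10) for two
    binary (0/1) spike trains with the same bin size."""
    x, y = np.asarray(x, dtype=int), np.asarray(y, dtype=int)
    T = len(x) - d - 1
    p = np.zeros((2, 2, 2))                  # p(x_{t+1+d}, x_{t+d}, y_t)
    np.add.at(p, (x[1 + d:1 + d + T], x[d:d + T], y[:T]), 1)
    p /= T
    p_xy = p.sum(axis=0)                     # p(x_{t+d}, y_t)
    p_xx = p.sum(axis=2)                     # p(x_{t+1+d}, x_{t+d})
    p_x = p.sum(axis=(0, 2))                 # p(x_{t+d})
    a, b, c = np.nonzero(p)                  # skip zero-probability terms
    return float((p[a, b, c] * np.log2(p[a, b, c] * p_x[b]
                                       / (p_xx[a, b] * p_xy[b, c]))).sum())
```

The network-level TE of Equation (11) below then follows by taking, for each sampled pair, the maximum of this quantity over the delay range and averaging over pairs.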
For each combination of the parameters {J, H}, we compute the network TE by selecting K = 10,000 randomly chosen neuron pairs (neuron y and neuron x) without repetition. For each pair, TE is measured as in Equation (10); since the communication delay is unknown, we measure TE for delays in the range d ∈ [155, 300] bins, with a bin size of 0.1 ms, and use the maximum TE in this range [48]. The choice of this delay range took into consideration the synaptic delay τ_D and the membrane time constant τ_m (which characterizes the voltage rise time towards the spike threshold). In the end, we extract the average TE,
$TE = \frac{1}{K} \sum_{j=1}^{K} \max_d \{ TE_j(d) \}$,  (11)
where TE_j is the transfer entropy of the j-th pair of neurons. Considering that we used 100 different combinations of {J, H} for 10 different initial conditions (yielding 1000 networks), and that for each network we evaluated 10,000 neuron pairs over a range of 145 delays, at least 1.45 billion TE computations were needed in this work. Thus, the computation of TE demanded extensive parallel computation.
The above definition of TE is valid for spike-trains of neuron pairs. It will be called here "microscopic" TE, or simply TE. We also introduce a second definition of TE, based on the firing rates (activities) of pairs of modules, which will be used to measure information flow at the macroscopic level. We will refer to this "macroscopic" TE as TE(H). To calculate TE(H) for a given hierarchical level H, we randomly selected 500 pairs of modules and measured the transfer entropy for each pair (i, j) using Equation (10) with d = 0, with x and y being the activities r_i(t) and r_j(t) of the two modules, respectively. The activity of a module is calculated as in Equation (3), with N equal to the number of neurons in the module. Then, we take the average over the 500 pairs of modules,
$TE(H) = \frac{1}{K} \sum_{j=1}^{K} TE_j(H)$,  (12)
where j is the index of the module pair, TE_j(H) is the transfer entropy of the j-th pair, and K = 500. For networks with fewer than 500 possible module pairs, we compute TE(H) as above but take the average over the smaller number of pairs. Since the activity of a module is a continuous variable, we estimated the joint probabilities in Equation (10) using a Gaussian kernel density estimator with bandwidth 0.3 [43].
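A rough way to obtain this module-level TE is sketched below: the two rate traces are discretized into a few quantile bins and the counting estimator above is reused. Note that this substitutes simple binning for the Gaussian kernel density estimator actually used in the paper.

```python
import numpy as np

def macroscopic_te(r_x, r_y, nstates=8):
    """TE_{y->x}(d = 0) between two module activity traces, with the
    continuous rates discretized into `nstates` quantile bins (the
    paper estimates the densities with a Gaussian KDE instead)."""
    def discretize(r):
        edges = np.quantile(r, np.linspace(0, 1, nstates + 1)[1:-1])
        return np.digitize(r, edges)
    x, y = discretize(r_x), discretize(r_y)
    T = len(x) - 1
    p = np.zeros((nstates, nstates, nstates))  # p(x_{t+1}, x_t, y_t)
    np.add.at(p, (x[1:], x[:-1], y[:-1]), 1)
    p /= T
    p_xy = p.sum(axis=0); p_xx = p.sum(axis=2); p_x = p.sum(axis=(0, 2))
    a, b, c = np.nonzero(p)
    return float((p[a, b, c] * np.log2(p[a, b, c] * p_x[b]
                                       / (p_xx[a, b] * p_xy[b, c]))).sum())
```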
To evaluate statistical dependency among modules, we extracted the mutual information [47] among pairs of adjacent modules using a procedure similar to the one described above for T E ( H ) . The mutual information between two variables x and y is given by:
$MI(x; y) = \sum_{x \in x_t} \sum_{y \in y_t} p(x, y) \log_2 \frac{p(x, y)}{p(x)\, p(y)}$.  (13)
For a given hierarchical level, we selected the 2^H pairs of adjacent modules {(1, 2), (2, 3), …, (2^H − 1, 2^H), (2^H, 1)}, where the numbering scheme is the one introduced in Figure 1. Then, the mean mutual information over the set of 2^H adjacent pairs is given by $MI(H) = \sum_{k=1}^{2^H} MI_k / 2^H$, where MI_k is the mutual information between the k-th pair of adjacent modules as defined above.
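Equation (13) can likewise be estimated from a joint histogram of the two activity traces (a minimal sketch; the number of bins is our choice):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """MI(x;y) of Equation (13) between two module activity traces."""
    p_xy, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy /= p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])).sum())
```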
All neuron and network models were implemented using the Brian 2 neurosimulator [49]. Statistical and information-theoretical analyses were implemented in self-developed Python packages, which are available on GitHub [50]. Network visualization was done with the Python package NetworkX. Simulations were performed on the NeuroMat (neuromat.numec.prp.usp.br) cluster.

3. Results

3.1. Information Transfer is Enhanced When Both Modularity and Synaptic Strength Increase

As described in the Methods, for each hierarchical level H (in the range from zero to nine), we ran simulations of the network with coupling strength J in the range [0.1, 0.15, …, 1] (in millivolts) and g = 5. In Figure 3, we show the raster plots and corresponding firing rates for three H values (H = 0, which corresponds to an Erdős–Rényi graph; H = 7; and H = 9) and two J values (J = 0.2 mV and J = 0.8 mV).
The network with H = 0 can have two types of asynchronous activity. In the case of weak coupling (cf. H = 0 and J = 0.2 mV in Figure 3), neurons fire irregularly and no synchronous behavior is observed. In addition, the population firing rate is low (the average value of r(t) for J = 0.2 mV is ν = 17.6 ± 5.6 Hz, where ± indicates standard deviation) and homogeneous. As the synaptic strength increases (cf. H = 0 and J = 0.8 mV in Figure 3), the activity changes to a more heterogeneous behavior in which single neurons fire in bursts of high activity interspersed with short periods of low activity, and the network firing rate displays a less homogeneous behavior with some irregular fluctuations. The mean firing rate also increases (ν = 53.1 ± 12.5 Hz for J = 0.8 mV). Evidence of the fluctuations that appear when J is increased is the growth of the standard deviation of r(t), which more than doubles as J changes from 0.2 mV to 0.8 mV.
In the second and third columns of Figure 3, we compare activity dynamics for hierarchical levels H = 7 and H = 9 and synaptic strengths J = 0.2 mV and J = 0.8 mV. For both hierarchical levels, heterogeneous spiking behavior and modularity effects appear already for low synaptic strength (cf. J = 0.2 mV) and become more pronounced as J increases (cf. J = 0.8 mV). The population firing rate is also very sensitive to increases in both J and H: for fixed J, the firing rate increases with H, and for fixed H, it increases with J. For quantitative comparison, the average population firing rate values are: (i) (H = 7, J = 0.2 mV): ν = 30.2 ± 7.7 Hz; (ii) (H = 7, J = 0.8 mV): ν = 102.9 ± 15.4 Hz; (iii) (H = 9, J = 0.2 mV): ν = 129.3 ± 12.1 Hz; and (iv) (H = 9, J = 0.8 mV): ν = 187.8 ± 16.6 Hz. In addition, as H increases, modules begin to act more individually, as can be seen in the different spike patterns of each module (observe the horizontal bands in alternating gray and black in the panels with H = 7 and 9). In the following, we will show that both a high hierarchical level H and a high synaptic strength J also increase information transmission in the network.
In Figure 4a–e, we present extended statistics that shed light on the effects of increasing J and H. Analysis of the spike-train power spectra in Figure 4a,b shows that an increase of either J or H leads to a build-up of slow fluctuations in the network. However, the effect is more pronounced for an increase in J than for an increase in H. For example, for fixed H = 0, a change in J from 0.2 mV to 0.8 mV increases the power at low frequencies by about two orders of magnitude, whereas for fixed J = 0.2 mV, a change in H from zero to nine increases the low-frequency power by about one order of magnitude. Overall, the spectral characteristics are similar to those of cortical neurons [51].
For low values of H, typically H < 7, the mean network firing rate ν displays a non-monotonic behavior as a function of J: it initially decreases towards a minimum and then increases, as shown in Figure 4c (green and red curves). The minimum marks the transition from the asynchronous homogeneous behavior to the asynchronous heterogeneous behavior (compare the raster plots in Figure 3 for H = 0). For H ≥ 7, the minimum disappears, and the curve of ν versus J grows monotonically towards a saturation firing rate (purple and blue curves in Figure 4c).
The Fano factor FF, on the other hand, grows with J for all hierarchical levels H. What changes is the growth rate, which is much higher for low H than for high H (again, the transition point is around H = 7). For low H, FF starts at values well below one (indicating low spike variability) for low synaptic strengths and rises steeply, by about two orders of magnitude, as the synaptic strength increases, indicating a rapid increase in spike variability (see the green and red curves in Figure 4d). The FF growth is not as pronounced when H ≥ 7, with variations of one order of magnitude or less (purple and blue curves in Figure 4d). Interestingly, the asymptotic FF value for large J is lower for H = 9 than for H = 8, suggesting that there is a limiting level of modularity beyond which spike variability and heterogeneity do not grow.
The behavior of the correlation time τ_c as a function of J is similar to that of the firing rate ν: it decreases to a minimum and then increases with J when H < 7, and grows monotonically with J for H ≥ 7 (Figure 4e). Overall, the behavior of ν, FF, and τ_c reflects the amplification of slow fluctuations and the increase of network firing rate and spike variability provoked by topological (introduction of modularity) and synaptic strength changes in the network, and is comparable with the behavior of these variables in random networks with fixed in-degrees reported elsewhere [40,42].
In order to characterize information flow in the network, we show in Figure 4f the behavior of TE in the parameter space spanned by J and H (each point corresponds to an average over 10 different initial conditions). For very low values of synaptic coupling (J ≲ 0.2), the effect of modularity on TE is not very significant until H ≈ 6, as can be seen from the vertical arrangement of shaded stripes in the diagram. For intermediate coupling strengths (0.2 ≲ J ≲ 0.5), the effect of modularity on TE becomes significant (stripes are predominantly horizontal), and for strong coupling (J ≳ 0.5), the effect is again reduced (stripes are vertically arranged again). The exception is when the number of modules is very high (H ≳ 8), in which case TE is insensitive to coupling strength. Regarding the behavior of TE with respect to changes in J and H, in the region of the diagram where TE is more sensitive to J (the region with H ≲ 5), TE decreases towards a minimum as J grows from 0.1 to 0.3 and then increases toward high values as J grows from 0.3 to 1. This behavior is similar to that of τ_c depicted in Figure 4e. The maximum value of TE in this region occurs for strong coupling (J = 1) and either a single module (H = 0) or two modules (H = 1). In the region of the diagram where the effect of modularity is important (H ≳ 5), TE tends to grow with H. The maximum value of TE is attained for the largest number of modules considered (H = 9), and this value is comparable to the maximum of TE in the region where TE is more sensitive to J.
Results in this section show that both slow fluctuations and information transmission are largely enhanced when J and H grow. We hypothesize that, as J and H increase, the modules start to act as single units. For example, in Figure 3, the modules in networks with high J and H exhibit different individual behaviors and can be identified visually. All modules display bursts of intense activity intercalated with periods of low activity, but each module has its own pattern of burst/quiescence alternation, which does not coincide with the patterns of the others. This suggests that when both the synaptic coupling and the number of modules are high, modules behave as independent functional units. In the next section, we investigate this suggestion by studying the auto- and cross-correlations of the neuronal spike-trains.

3.2. Effects of J and H on the Autocorrelation and Cross-Correlation of Single-Neuron Spike-Trains

In this section, we investigate the autocorrelation and cross-correlation of the spike-trains of single neurons in order to obtain a better understanding of the individual properties of neurons when slow fluctuations and information transmission are incremented due to increases in the synaptic coupling strength J and/or the hierarchical level H.
In Figure 5, we show the autocorrelation c_xx(τ) and the cross-correlation c_xy(τ), as defined in the Methods, for selected pairs of parameters (J, H) taken from the sets J = {0.2, 0.4, 0.6, 0.8} and H = {0, 2, 4, 6, 8}. When the topology of the network is not modular (bottom row of Figure 5), the increase in the synaptic coupling J produces an increase in the spike-train autocorrelation but has almost no effect on the spike-train cross-correlation. This reflects the effect of J in enhancing slow fluctuations while keeping the network activity asynchronous, as observed before (cf. the first column of raster plots in Figure 3 and the green curves for H = 0 in Figure 4a–e). In other words, in a non-modular network, when the synaptic coupling increases, the spikes of an individual neuron tend to become more correlated over short times, but remain independent of the spikes of other neurons.
In contrast to this situation, when the number of modules is high (upper rows of Figure 5), the increase in J affects both the spike-train autocorrelation and cross-correlation. The short-time cross-correlation increases when the synaptic coupling is strong, indicating a weak but non-negligible degree of functional coupling between neurons. The autocorrelation also increases with J, but this increase is less pronounced than for H = 0.
The different behaviors of the spike-train auto- and cross-correlations upon the increase of J in networks with non-modular and modular topologies hint that a more complex activity pattern emerges at the population level when hierarchical modularity is introduced in the network, which was not present when H = 0. Moreover, the microscopic TE measure used in the previous section was not able to capture this difference: in the diagram of Figure 4f, the regions defined by (H = 0, J ≈ 0.9) and (H = 9, J ≈ 0.9) have approximately the same values of TE. The above results suggest that the introduction of a hierarchical modular topology produces some form of population communication (reflected in the increase of spike-train cross-correlation) that was not present in the network with non-modular topology. Since the microscopic TE measure was not sensitive to this difference, we will use the macroscopic TE (TE(H)) introduced in the Methods to test whether it can be helpful in this case. This is the subject of the next section.
Why does the spike-train cross-correlation increase with the hierarchical level? To understand this, we derive below equations describing how the internal (i.e., intramodular) and external (i.e., intermodular) connectivity is affected by the hierarchical level H. We focus on the average number of connections as they are rewired at each new increment of H. In the calculations below, we make no distinction between excitatory and inhibitory connections, thus keeping everything in general terms.
Let us start with the network where H = 0. For large N, the expected number of connections to a neuron that come from inside the single module is $n_{\mathrm{in}}^{(H=0)} = N\epsilon$, where the superscript indicates the hierarchical level H = 0.
Now, when H = 1 , the rewiring algorithm tells us that one should divide the network and rewire its connections, which means that the expected number of connections to a neuron from the same module where it is located is half of the previous value plus the expected number of connections to the other module that are cut and rewired back to the neuron (we will assume, for simplicity, that the rewiring probability is R for all connections):
$n_{\mathrm{in}}^{(H=1)} = \frac{n_{\mathrm{in}}^{(H=0)}}{2} + \frac{n_{\mathrm{in}}^{(H=0)}}{2} \times R$.  (14)
Equation (14) gives the average number of connections to a neuron that come from inside the same module. In a similar way, the average number of connections that come from outside the module to the neuron is given by:
$n_{\mathrm{out}}^{(H=1)} = n_{\mathrm{in}}^{(H=0)} - n_{\mathrm{in}}^{(H=1)} = N\epsilon - n_{\mathrm{in}}^{(H=1)}$.  (15)
Note that we can re-write Equation (15) for any hierarchical level H > 0 because the expected number of connections from outside a module will always be the expected number of connections at H = 0 minus the expected number of connections from inside the module after rewiring:
$n_{\mathrm{out}}^{(H)} = N\epsilon - n_{\mathrm{in}}^{(H)}$.  (16)
For the hierarchical level H = 2 , we follow the same procedure used to derive Equation (14) and obtain the expression for n in ( H = 2 ) , but now considering that the connections from outside the module when H = 1 are also rewired:
$n_{\mathrm{in}}^{(H=2)} = \frac{n_{\mathrm{in}}^{(H=1)}}{2} + \frac{n_{\mathrm{in}}^{(H=1)}}{2} \times R + n_{\mathrm{out}}^{(H=1)} \times R = \frac{n_{\mathrm{in}}^{(H=1)}}{2}(1 - R) + N\epsilon R$.  (17)
For hierarchical levels H > 1 , we recursively apply the above equations and obtain the expression:
$n_{\mathrm{in}}^{(H+1)} = \frac{N\epsilon}{2} \left[ \left( \frac{1-R}{2} \right)^{H} + 2R \sum_{k=0}^{H-1} \left( \frac{1-R}{2} \right)^{k} \right]$.  (18)
In summary, Equation (18) gives the expected number of connections to a neuron that come from its own module at the hierarchical level H > 1 , and Equation (16) gives the expected number of connections to a neuron that come from outside its module for any H > 0 .
It is interesting to note that the rewiring procedure is bounded with respect to n_in: $\lim_{H \to \infty} n_{\mathrm{in}} = \frac{2 N R \epsilon}{R + 1}$. This means that, as H increases, the average number of connections to a neuron that come from inside its own module approaches a fixed value, no matter how small the module becomes. This fact is important because it shows that the average density of connections within a module, $\epsilon_{\mathrm{in}} = (2^H \times n_{\mathrm{in}}) / N$, increases dramatically once this limit is approached, since the number of neurons within a module decreases as H increases. Concomitantly, n_out is also bounded, since it is directly related to n_in (Equation (16)).
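A few lines of Python suffice to check this limit and the growth of ε_in numerically (our own iteration of the recursion in Equations (14)–(17), using the paper's N and ε and the excitatory rewiring probability R = 0.9):

```python
N, eps, R = 2**17, 0.01, 0.9
n_in = N * eps * (1 + R) / 2                 # level H = 1, Equation (14)
for H in range(1, 10):
    eps_in = 2**H * n_in / N                 # within-module density
    print(f"H={H}: n_in={n_in:7.1f}  eps_in={eps_in:.4f}")
    n_in = n_in * (1 - R) / 2 + N * eps * R  # recursion, Equation (17)
# n_in approaches 2*N*eps*R/(R+1) ~ 1241.7, while eps_in grows with 2^H
```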
The set of Equations (14)–(18) can elucidate why cross-correlations within a module increase as H increases. In Figure 6a, we show how the value of ε_in changes as a function of the hierarchical level H: the density of connections within a module grows exponentially with H. As ε_in increases exponentially, a higher degree of synchronous activity in the network is expected, and thus correspondingly higher values of spike-train cross-correlations are also expected. In fact, a random rewiring of connections, which is akin in nature to the random occurrence of events in a Poisson process, is expected to lead to an exponential growth of spike-train cross-correlations.
To check how slow fluctuations build up with increasing connectivity within a module, we simulated a network with N = 2^14 neurons and H = 0 (representing a single module) with varying values of ϵ. The spike-train power spectra of the network for the different values of ϵ are shown in Figure 6b. One can see that slow fluctuations start to build up as ϵ increases (note the initial values on the left-hand side of the plots).
Results in this section show how single-neuron behavior is affected by increases of J and H. Some phenomena, like the enhancement of information transfer and the build-up of slow fluctuations, emerge and display similar properties when either J or H is large. However, other measures, like the spike-train autocorrelation and cross-correlation, behave in different ways when J or H increases. In particular, the results suggest that information flow at the population level is more robust in the presence of a hierarchical modular topology. To better understand how information flow at the population level is affected when the hierarchical level is increased, in the next section we study the effect of increasing J and H on the macroscopic TE introduced in the Methods.

3.3. Information Flow at the Population Level

In this section, we focus on how information flows at the macroscopic scale of modules in the network. The algorithm used to build hierarchical modular topologies allows gradually observing how different measures increase or decrease with the parameter H. We have already shown that H and J affect differently the spike-train auto- and cross-correlations, and in this section, we are interested in how information flow measured at the modular level behaves as J and H vary. Is the behavior different or similar to the one seen for information flow at the single-neuron level?
First, we recall Figure 4f, where it can be observed that increasing H causes an enhancement of information flow at the microscopic level (TE). This can be interpreted as an increase in the "usefulness" of the knowledge of the spike-train of a given neuron in predicting the future behavior of the spike-train of a different neuron. Here, considering the hypothesis that communication can take place not only at the level of the single units of the network (the "microscopic" level) but also at the level of the modules in which the network is organized (the "macroscopic" level), we will evaluate information flow among modules using the measure TE(H) introduced in the Methods section.
In Figure 7a, we can observe that the communication among modules is indeed very different from the one among neurons shown in Figure 4f. The most compelling difference is the existence of an intermediate range of H values (around H = 6) at which TE(H) is maximal. Furthermore, above and below this range, there are two contrasting behaviors: for low H (H ≲ 4), TE(H) decays monotonically as J increases; for high H (H ≳ 7), this behavior is somewhat mirror-inverted, and TE(H) increases monotonically with J.
The boxplots in the inset of Figure 7a, which display the distributions of TE(H) for different H values over the entire range of J values, show that H = 6 has the highest mean and the lowest variance of TE(H). This clearly shows that H = 6 is an optimal point for information transmission among modules.
The results in Figure 7a indicate that a form of modular communication takes place in the hierarchical modular networks. There is an "optimal" level of hierarchical modular organization (neither the lowest nor the highest level) at which the macroscopic TE is maximal. Moreover, at this "optimal" H level, the macroscopic TE is relatively insensitive to changes in the synaptic strength J. Only when H is above or below the optimal value is communication at the modular level significantly influenced by the synaptic strength J.
The results of the previous two sections suggest that as H increases, the modules start to behave as individual functional units. To test this hypothesis, we computed the mutual information among modules, MI(H). This metric can be interpreted as a measure of statistical dependence among the considered elements [47]. In Figure 7b (neglecting the behavior for H ≲ 4), one can see that as H increases, MI(H) decreases, indicating that the modules act more independently as the hierarchical modular level increases. Interestingly, Figure 7b also shows that for intermediate H values (5 ≲ H ≲ 7), the synaptic strength J plays a role in the statistical dependence among modules. Within this intermediate range of H values, MI(H) increases with J, indicating that the modules become less statistically independent as the synaptic strength increases. Since the microscopic parameter J is associated with the emergence of slow fluctuations in the network activity, this points to a link between slow activity fluctuations and statistical dependency among modules.

4. Discussion

An important problem in computational neuroscience is the investigation of different dynamics displayed by networks of spiking neurons [23,52,53,54] and in particular the ones that enhance information processing such as dynamics with slow fluctuations [26,42,55]. Region-to-region communication characteristics and how they interact with the topological features of the network are also of great interest because they shed light on the relationship between topology and dynamics [56,57]. Here, we addressed this problem by investigating networks with a hierarchical modular topology, which display generic features of cortical networks [17,20,24], and how the topological structure affects information flux.
We constructed large networks of spiking neurons with variable levels of (i) hierarchy and modularity and (ii) synaptic strength. By extracting information-theoretic measures (microscopic and macroscopic T E and M I ), we were able to observe that both information propagation and slow activity fluctuations could be optimized by combining (i) and (ii). Our goal was to analyze how the interplay of intrinsic neuronal parameters and topological features influenced activity propagation and how this was related to different spatial scales (the “microscopic” scale of single neurons and the “macroscopic” scale of neuronal modules).
More specifically, we started with a comparison of spiking activity characteristics between networks with Erdős–Rényi and hierarchical modular topologies. The activities of the networks with the two topologies were characterized in terms of their variation with the synaptic strength J. Because the relative inhibitory synaptic strength g was fixed at five, the activity displayed by these networks is of the type known as "asynchronous irregular" (AI), as shown in previous works [26,40,42]. Indeed, we observed AI-like activity in our networks. In networks with AI activity, neurons fire without correlation, and the increase of J to high values creates a second type of AI activity, called "heterogeneous" AI [26], which is characterized by the emergence of slow fluctuations [40,42]. The heterogeneous AI regime has bursts of spikes intercalated with periods of silence. We observed this pattern again in our study, but for high values of the hierarchical level H, the heterogeneous behavior appeared even at low J. Moreover, when H was high, the different modules displayed heterogeneous spiking patterns, i.e., they behaved as units independent of each other.
Then, we moved on to a study of information transmission in the hierarchical modular networks as a function of the topological parameter H and the microscopic synaptic strength parameter J. To investigate possible different ways of communication in the network, namely at the microscopic level of neurons and at the macroscopic level of modules, we used two different measures of T E : T E and T E ( H ) . The microscopic measure T E was based on the neuronal spike-trains, and the macroscopic measure T E ( H ) was based on the average firing rates (activities) of the modules.
Let us call the type of communication at the microscopic level C_micro and the type of communication at the macroscopic level C_macro. Then, when exploring C_micro and C_macro, there were two possibilities: (i) TE in C_macro is predictable from the measurement of TE in C_micro (and vice versa); or (ii) communication at these two scales is completely different. If Possibility (i) were true, we would expect the two measures, TE and TE(H), to display similar properties when observed in the J–H diagram. In such a case, communication in the network would be independent of the two scales, and bridging between C_micro and C_macro would be directly possible. On the other hand, if Possibility (ii) were true, knowledge of either TE or TE(H) could not be used to explain the other measure, because they would be capturing different things.
Our study showed that Possibility (ii) is true, i.e., C_micro and C_macro are different. The behavior of TE in the J–H diagram shows that there are two regions where C_micro is maximal: the line at the top of the diagram, where H = 9 (independently of J), and the bottom right-hand corner, where H ≲ 1 and J ≈ 1. The J–H diagram for TE(H) shows the opposite situation: C_macro is maximal along the line given by H = 6 and is very low in the regions where C_micro is maximal. The main finding of our study was that there was an intermediate value of the hierarchical level (within the range of H values considered) for which C_macro was maximal. This "optimal" type of behavior was not found when we studied C_micro.
As an attempt to explain the observed behavior of C micro and C macro , we investigated two other types of measures. In the case of C micro , we used the spike-train auto- and cross-correlations. In the case of C macro , since our hypothesis was that the observed behavior was due to the emergence of independent modules, we used the mutual information among modules, M I ( H ) .
As noted above, in the J–H diagram for TE, there are two regions where TE is maximal: the upper right-hand corner, where both H and J are highest, and the lower right-hand corner, where H = 0 and J = 1. The observation of TE alone is not enough to reveal the mechanisms underlying these seemingly similar behaviors. The spike-train auto- and cross-correlations help in this disambiguation. The high TE for a non-modular network with high J is due to the increase in the spike-train autocorrelation with J, while the high TE for a network with high J and many modules is due to the increase in the spike-train cross-correlation with H.
Interpreting MI(H) as a measure of dependence among modules (high MI(H) meaning greater mutual dependence and low MI(H) meaning greater independence), our results (cf. Figure 7b) showed that modules became relatively more independent as H grew (neglecting situations with H ≲ 4). The situation with the highest level of modular independence was the one with the highest H (H = 9), and the situation with the lowest level of modular independence was the one with the lowest H (H = 5). Combining this result with the results shown in the diagram for TE(H) in Figure 7a, one sees that the scenario with maximum C_macro occurred in a situation where modules were neither too independent of nor too dependent on each other. If all modules were completely independent, they would act as autonomous units, and TE(H) would be near zero; if the modules were very interdependent, they would act more or less as a single unit, and TE(H) would also be low (knowledge of the activity of a single module would be enough to infer the activities of all the others). Therefore, the optimal situation for information transfer among modules as measured by TE(H) was the one in which modules were in an intermediate position between total autonomy and total interdependence. This corresponded to the case with H = 6.
The optimal value H = 6 did not mean that there was something special about the number six. Our study only showed that the modular T E was maximized at an intermediate value in the range of H values used, which in our case was [ 0 , 9 ] because of the number N of neurons chosen. We predict that a similar study with twice as many neurons, which would allow H values close to 20, would result in an optimal H value higher than six.
Previous studies concentrated either on other features that were enhanced by topological characteristics or on different types of activity regimes. For instance, it has been shown that hierarchical modular networks are advantageous for long-lived self-sustained activity [24,25] and can present critical behavior [23] that is related to optimal dynamic range [58]. Complementary to that, it has been shown that augmentation of the synaptic strength generates different versions of the standard AI activity, which may favor information processing [26]. In our work, we showed that hierarchical modularity also affected information transmission. In particular, our results suggested that there may be a transition point in the level of hierarchical modular organization that endows the network with a high level of macroscopic communication independently of the synaptic strength.
We observed that slow activity fluctuations increased with both the hierarchical modular level H and the synaptic strength J. However, the spike-train cross-correlation variation was more sensitive to J than to H. Recent studies investigated the influence of correlations in neuronal activity over information transmission [59,60,61]. Here, the transfer entropy measure clearly showed an increase in information propagation at the single-neuron level for high hierarchical modular levels, which we showed to be related to the increase of the spike-train cross-correlation caused by the rewiring process.
As one of the objectives of our work was to understand the benefits of a hierarchical modular structure for information transmission, we compared the microscopic T E , based on the spike-trains of pairs of neurons, with the macroscopic T E , based on the firing rates of pairs of modules. Our results suggested that networks with a hierarchical modular structure may be optimized for communication at the macroscopic level, i.e., at the level of modules instead of single neurons. A speculative interpretation of this is that signals produced at the level of modules (firing rates) are more robust and less prone to deleterious noise effects than signals produced at the level of single neurons (isolated spikes).
In addition to that, our result that modules started to act more individually as the hierarchical modular level increased could be interpreted in line with suggestions made elsewhere that the activity in modular networks provides functional segregation and integration [23,62], which is certainly an advantage in terms of memory storage.
One final point concerning the difference between communication at micro and macro scales is worth mentioning. For communication at the level of spike-trains, the information flow always increases with J, which would imply a high metabolic cost for synaptic communication [63,64]. On the other hand, for communication at the level of modular firing rates when the network is close to the optimal hierarchical level, the variance of information flux is at a minimum, independently of the value of J. This suggests that the hierarchical modular structure may optimize the macroscopic information flow at a lower metabolic cost.
Our model included some simplifications that must be mentioned here because we intend to address them in future studies. First, the model did not have synaptic delays among modules (which would be progressively higher as the distance increases) and spatial mapping. These would take into consideration morphological features of neurons, cell-specific coupling affinities, and the spatial features of the network. Secondly, instead of constant external input, a more realistic type of external drive to network neurons would be noisy input reminiscent of stochastic synaptic events or other noise sources. Thirdly, information transmission was only studied in terms of spontaneous activity and did not consider structured activity patterns as, e.g., the ones that would be generated by sensory stimuli. We may still learn more about information propagation in hierarchical modular networks by extending the current model to situations like these.
Overall, we believe that our work captured, with a simple model, novel and important properties of communication and information processing in networks of spiking neurons. We provided new understanding of how topology may be connected to network dynamics (i.e., slow fluctuations) and information propagation. Our results and techniques could be applied in future research focused on how cortical networks optimize information processing and propagation.

Author Contributions

R.F.O.P., V.L., and A.C.R.: Conceived the work; R.F.O.P., V.L., R.O.S.: Developed the codes and performed the computations; R.F.O.P., V.L., R.O.S., J.P.N., and A.C.R.: Analyzed the results; R.F.O.P., V.L., R.O.S., and A.C.R.: Wrote the manuscript. All authors read and agreed to the published version of the manuscript.

Funding

This paper was developed within the scope of the IRTG 1740/TRP 2015/50122-0, funded by DFG/FAPESP. This work was partially supported by the Research, Innovation and Dissemination Center for Neuromathematics (FAPESP grant 2013/07699-0) and by FAPESP grant 2018/20277-0. R.F.O.P. is supported by a FAPESP Ph.D. scholarship (grant 2013/25667-8), V.L. is supported by a CAPES Ph.D. scholarship. V.L. was partially supported by a FAPESP MSc scholarship (grant 2017/05874-0) at early stages of this work, R.O.S. is supported by a FAPESP Ph.D. scholarship (grant 2017/07688-9) and A.C.R. is partially supported by a CNPq fellowship (grant 306251/2014-0). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES)—Finance Code 001.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Paxinos, G.; Huang, X.; Toga, A.W. The Rhesus Monkey Brain in Stereotaxic Coordinates; Academic Press: San Diego, CA, USA, 2000.
  2. Sporns, O.; Tononi, G.; Kötter, R. The human connectome: A structural description of the human brain. PLoS Comput. Biol. 2005, 1, e42.
  3. Bullmore, E.T.; Bassett, D.S. Brain graphs: Graphical models of the human brain connectome. Annu. Rev. Clin. Psychol. 2011, 7, 113–140.
  4. Sporns, O. The Non-Random Brain: Efficiency, Economy, and Complex Dynamics. Front. Comput. Neurosci. 2011, 5, 5.
  5. Alivisatos, A.P.; Chun, M.; Church, G.M.; Deisseroth, K.; Donoghue, J.P.; Greenspan, R.J.; McEuen, P.L.; Roukes, M.L.; Sejnowski, T.J.; Weiss, P.S.; et al. The brain activity map. Science 2013, 339, 1284–1285.
  6. da Costa, N.M.; Martin, K.A. Sparse reconstruction of brain circuits: Or, how to survive without a microscopic connectome. Neuroimage 2013, 80, 27–36.
  7. Stephan, K.E. The history of CoCoMac. Neuroimage 2013, 80, 46–52.
  8. Szalkai, B.; Kerepesi, C.; Varga, B.; Grolmusz, V. High-resolution directed human connectomes and the Consensus Connectome Dynamics. PLoS ONE 2019, 14, e0215473.
  9. Potjans, T.C.; Diesmann, M. The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model. Cereb. Cortex 2014, 24, 785–806.
  10. Schuecker, J.; Schmidt, M.; van Albada, S.; Diesmann, M.; Helias, M. Fundamental activity constraints lead to specific interpretations of the connectome. PLoS Comput. Biol. 2017, 13, e1005179.
  11. Yamamoto, H.; Moriya, S.; Ide, K.; Hayakawa, T.; Akima, H.; Sato, S.; Kubota, S.; Tanii, T.; Niwano, M.; Teller, S.; et al. Impact of modular organization on dynamical richness in cortical networks. Sci. Adv. 2018, 4, eaau4914.
  12. Avena-Koenigsberger, A.; Misic, B.; Sporns, O. Communication dynamics in complex brain networks. Nat. Rev. Neurosci. 2018, 19, 17.
  13. Laughlin, S.B.; Sejnowski, T.J. Communication in neuronal networks. Science 2003, 301, 1870–1874.
  14. Tkačik, G.; Bialek, W. Information processing in living systems. Annu. Rev. Condens. Matter Phys. 2016, 7, 89–117.
  15. Friston, K.J. Functional and effective connectivity: A review. Brain Connect. 2011, 1, 13–36.
  16. Van Den Heuvel, M.P.; Pol, H.H. Exploring the brain network: A review on resting-state fMRI functional connectivity. Eur. Neuropsychopharmacol. 2010, 20, 519–534.
  17. Mountcastle, V.B. The columnar organization of the neocortex. Brain 1997, 120, 701–722.
  18. Hagmann, P.; Cammoun, L.; Gigandet, X.; Meuli, R.; Honey, C.J.; Wedeen, V.J.; Sporns, O. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008, 6, e159.
  19. Bullmore, E.; Sporns, O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 2009, 10, 186.
  20. Kaiser, M.; Hilgetag, C.C. Optimal hierarchical modular topologies for producing limited sustained activation of neural networks. Front. Neuroinform. 2010, 4, 8.
  21. Meunier, D.; Lambiotte, R.; Bullmore, E.T. Modular and hierarchically modular organization of brain networks. Front. Neurosci. 2010, 4, 200.
  22. Shafi, R. Understanding the Hierarchical Organization of Large-Scale Networks Based on Temporal Modulations in Patterns of Neural Connectivity. J. Neurosci. 2018, 38, 3154–3156.
  23. Wang, S.-J.; Hilgetag, C.; Zhou, C. Sustained activity in hierarchical modular neural networks: Self-organized criticality and oscillations. Front. Comput. Neurosci. 2011, 5, 30.
  24. Tomov, P.; Pena, R.F.O.; Zaks, M.A.; Roque, A.C. Sustained oscillations, irregular firing, and chaotic dynamics in hierarchical modular networks with mixtures of electrophysiological cell types. Front. Comput. Neurosci. 2014, 8, 103.
  25. Tomov, P.; Pena, R.F.O.; Roque, A.C.; Zaks, M.A. Mechanisms of self-sustained oscillatory states in hierarchical modular networks with mixtures of electrophysiological cell types. Front. Comput. Neurosci. 2016, 10, 23.
  26. Ostojic, S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nat. Neurosci. 2014, 17, 594–600.
  27. Buehlmann, A.; Deco, G. Optimal information transfer in the cortex through synchronization. PLoS Comput. Biol. 2010, 6, e1000934.
  28. Lukoševičius, M.; Jaeger, H. Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 2009, 3, 127–149.
  29. Rodriguez, N.; Izquierdo, E.; Ahn, Y.Y. Optimal modularity and memory capacity of neural reservoirs. Netw. Neurosci. 2019, 3, 551–566.
  30. Zajzon, B.; Mahmoudian, S.; Morrison, A.; Duarte, R. Passing the message: Representation transfer in modular balanced networks. Front. Comput. Neurosci. 2019, 13, 79.
  31. Shih, C.; Sporns, O.; Yuan, S.; Su, T.; Lin, Y.; Chuang, C.; Wang, T.; Lo, C.; Greenspan, R.J.; Chiang, A. Connectomics-based analysis of information flow in the Drosophila brain. Curr. Biol. 2015, 25, 1249–1258.
  32. Gerstner, W.; Kistler, W.M.; Naud, R.; Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition; Cambridge University Press: Cambridge, UK, 2014.
  33. Hilgetag, C.; Burns, G.; O'Neill, M.; Scannell, J.; Young, M. Anatomical connectivity defines the organization of clusters of cortical areas in the macaque monkey and the cat. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2000, 355, 91–101.
  34. Hilgetag, C.; Kaiser, M. Clustered organization of cortical connectivity. Neuroinformatics 2004, 2, 353–360.
  35. Hendry, S.H.; Schwark, H.D.; Jones, E.G.; Yan, J. Numbers and proportions of GABA-immunoreactive neurons in different areas of monkey cerebral cortex. J. Neurosci. 1987, 7, 1503–1519.
  36. Markram, H.; Toledo-Rodriguez, M.; Wang, Y.; Gupta, A.; Silberberg, G.; Wu, C. Interneurons of the neocortical inhibitory system. Nat. Rev. Neurosci. 2004, 5, 793–807.
  37. Isaacson, J.S.; Scanziani, M. How inhibition shapes cortical activity. Neuron 2011, 72, 231–243.
  38. Fishell, G.; Kepecs, A. Interneuron types as attractors and controllers. Annu. Rev. Neurosci. 2020, 43, 1–30.
  39. Grün, S.; Rotter, S. Analysis of Parallel Spike Trains; Springer: Boston, MA, USA, 2010; Volume 7.
  40. Pena, R.F.O.; Vellmer, S.; Bernardi, D.; Roque, A.C.; Lindner, B. Self-consistent scheme for spike-train power spectra in heterogeneous sparse networks. Front. Comput. Neurosci. 2018, 12, 9.
  41. Neiman, A.B.; Yakusheva, T.A.; Russell, D.F. Noise-induced transition to bursting in responses of paddlefish electroreceptor afferents. J. Neurophysiol. 2007, 98, 2795–2806.
  42. Wieland, S.; Bernardi, D.; Schwalger, T.; Lindner, B. Slow fluctuations in recurrent networks of spiking neurons. Phys. Rev. E 2015, 92, 040901.
  43. Schreiber, T. Measuring information transfer. Phys. Rev. Lett. 2000, 85, 461.
  44. Palmigiano, A.; Geisel, T.; Wolf, F.; Battaglia, D. Flexible information routing by transient synchrony. Nat. Neurosci. 2017, 20, 1014–1022.
  45. Ito, S.; Hansen, M.E.; Heiland, R.; Lumsdaine, A.; Litke, A.M.; Beggs, J.M. Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model. PLoS ONE 2011, 6, e27431. [Google Scholar] [CrossRef] [PubMed]
  46. Wibral, M.; Lizier, J.T.; Priesemann, V. Bits from brains for biologically inspired computing. Front. Robot. AI 2015, 2, 5. [Google Scholar] [CrossRef] [Green Version]
  47. de Abril, I.M.; Yoshimoto, J.; Doya, K. Connectivity inference from neural recording data: Challenges, mathematical bases and research directions. Neural Netw. 2018, 102, 120–137. [Google Scholar]
  48. Wibral, M.; Pampu, N.; Priesemann, V.; Siebenhühner, F.; Seiwert, H.; Lindner, M.; Lizier, J.T.; Vicente, R. Measuring information-transfer delays. PLoS ONE 2013, 8, e55809. [Google Scholar] [CrossRef] [PubMed]
  49. Stimberg, M.; Brette, R.; Goodman, D. Brian 2: An intuitive and efficient neural simulator. eLife 2019, 8, e47314. [Google Scholar] [CrossRef]
  50. Repositories: InfoPy, and HMnetwork. Available online: github.com/ViniciusLima94 (accessed on 9 April 2020).
  51. Bair, W.; Koch, C.; Newsome, W.; Britten, K. Power spectrum analysis of bursting cells in area mt in the behaving monkey. J. Neurosci. 1994, 14, 2870–2892. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Brunel, M. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comput. Neurosci. 2000, 8, 183–208. [Google Scholar] [CrossRef]
  53. Renart, A.; Rocha, J.D.L.; Bartho, P.; Hollender, L.; Parga, N.; Reyes, A.; Harris, K.D. The Asynchronous State in Cortical Circuits. Science 2010, 327, 587. [Google Scholar] [CrossRef] [Green Version]
  54. Pena, R.F.O.; Zaks, M.A.; Roque, A.C. Dynamics of spontaneous activity in random networks with multiple neuron subtypes and synaptic noise. J. Comput. Neurosci. 2018, 45, 1–28. [Google Scholar] [CrossRef] [Green Version]
  55. Litwin-Kumar, A.; Doiron, B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat. Neurosci. 2012, 15, 1498. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Sporns, O.; Chialvo, D.R.; Kaiser, M.; Hilgetag, C.C. Organization, development and function of complex brain networks. Trends Cogn. Sci. 2004, 8, 418. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  57. Reijneveld, J.C.; Ponten, S.C.; Berendse, H.W.; Stam, C.J. The application of graph theoretical analysis to complex networks in the brain. Clin. Neurophysiol. 2007, 118, 2317–2331. [Google Scholar] [CrossRef] [PubMed]
  58. Kinouchi, O.; Copelli, M. Optimal dynamical range of excitable networks at criticality. Nat. Phys. 2006, 2, 348. [Google Scholar] [CrossRef] [Green Version]
  59. Galán, R.F.; Fourcaud-Trocme, N.; Ermentrout, G.B.; Urban, N.N. Correlation-induced synchronization of oscillations in olfactory bulb neurons. J. Neurosci. 2006, 26, 3646. [Google Scholar] [CrossRef] [Green Version]
  60. Moreno-Bote, R.; Renart, A.; Parga, N. Theory of input spike auto- and cross-correlations and their effect on the response of spiking neurons. Neural Comput. 2008, 20, 1651. [Google Scholar] [CrossRef]
  61. Barreiro, A.K.; Ly, C. Investigating the correlation–firing rate relationship in heterogeneous recurrent networks. J. Math. Neurosci. 2018, 8, 8. [Google Scholar] [CrossRef] [Green Version]
  62. Sporns, O.; Tononi, G.; Edelman, G.M. Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cereb. Cortex 2000, 10, 127–141. [Google Scholar] [CrossRef] [Green Version]
  63. Vincent, B.T.; Baddeley, R.J. Synaptic energy efficiency in retinal processing. Vis. Res. 2003, 43, 1285–1292. [Google Scholar] [CrossRef] [Green Version]
  64. Harris, J.J.; Jolivet, R.; Attwell, D. Synaptic energy use and supply. Neuron 2012, 75, 762–777. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Examples of hierarchical modular networks at different hierarchical levels. (Upper row) Schematic representation of the network for $H = 0$, $2$, and $3$. For ease of visualization and to highlight the intermodular connections, only networks with $N = 2^{11}$ neurons, all excitatory, are shown. (Bottom row) Adjacency matrices for networks with $N = 2^{13}$ neurons (excitatory and inhibitory in a 4:1 ratio) and the same $H$ levels as in the top row. Each dot represents a connection from a presynaptic to a postsynaptic neuron: blue dots mark connections from excitatory presynaptic neurons and red dots connections from inhibitory presynaptic neurons. For each hierarchical level $H$, the module numbers are shown below the corresponding adjacency matrix.
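The construction behind these networks can be summarized as recursive rewiring: start from a random directed graph and, at each hierarchical level, split every module in two and relocate intermodular connections into the source module with a rewiring probability (cf. Table 1). Below is a minimal Python sketch of this recipe; the function name hmn_adjacency and its simplifications (a single rewiring probability, no excitatory/inhibitory distinction, no guard against self-loops or duplicate edges) are our illustration choices, not the authors' implementation, which is available at github.com/ViniciusLima94.

```python
import numpy as np

rng = np.random.default_rng(1)

def hmn_adjacency(n=2**11, eps=0.01, h=3, r_ex=0.9):
    """Hierarchical modular network by recursive rewiring (sketch).

    Level 0 is a directed Erdos-Renyi graph with connection
    probability eps; at each further level, every module is split in
    two and intermodular edges are moved back inside the source
    module with probability r_ex.
    """
    a = rng.random((n, n)) < eps
    np.fill_diagonal(a, False)
    for level in range(1, h + 1):
        size = n // 2**level                        # module size at this level
        pre, post = np.nonzero(a)
        cross = (pre // size) != (post // size)     # intermodular edges
        move = cross & (rng.random(pre.size) < r_ex)
        for i, j in zip(pre[move], post[move]):
            a[i, j] = False                         # cut the long-range edge
            base = (i // size) * size
            a[i, base + rng.integers(size)] = True  # rewire inside i's module
    return a

adj = hmn_adjacency()
print(adj.sum(), "connections")
```

With h = 0 the function returns a plain random network, so the same generator covers every point of the H axis explored in the figures below.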
Figure 2. Method to measure the delayed transfer entropy using the joint probability distributions. (a) First, we take the spike trains of a pair of neurons in the network. (b) Then, we apply a delay $d$ to one of them to determine the joint probability distributions $p(x_t, y_t)$ (green arrow), $p(x_{t+1+d}, x_{t+d}, y_t)$ (red arrow), and $p(y_{t+1}, y_t)$ (blue arrow). Next, we estimate the transfer entropy by inserting these distributions into Equation (10). (c) Example plots of $TE_{y \to x}$ and $TE_{x \to y}$ for a simple system of two coupled neurons (shown in the inset) with $x \to y$ connection delay $\delta_{xy} = 2$ and $y \to x$ connection delay $\delta_{yx} = 3$. The respective $TE$s are maximized when the measurement delay $d$ equals the corresponding connection delay.
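Equation (10) itself is not reproduced in this excerpt, but the estimation pipeline of panels (a,b) can be illustrated with the standard first-order transfer entropy for binary spike trains. The sketch below is our own (hypothetical delayed_te helper); in particular, the exact delay-indexing convention is an assumption, so the toy example at the end peaks at the lag implied by that convention rather than necessarily reproducing the paper's panel (c). The authors' estimator lives in their InfoPy repository at github.com/ViniciusLima94.

```python
import numpy as np

def delayed_te(x, y, d, eps=1e-12):
    """Transfer entropy from y to x (bits) at measurement delay d.

    Plug-in estimator for binary spike trains, assuming the standard
    first-order form
      TE_{y->x}(d) = sum p(x_{t+1+d}, x_{t+d}, y_t)
                   * log2[ p(x_{t+1+d}|x_{t+d},y_t) / p(x_{t+1+d}|x_{t+d}) ].
    """
    t_max = len(x) - d - 1
    trip = np.stack([x[1 + d : t_max + 1 + d],      # x_{t+1+d}
                     x[d : t_max + d],              # x_{t+d}
                     y[:t_max]]).astype(int)        # y_t
    states = trip.T @ np.array([4, 2, 1])           # encode (x', x, y) as 0..7
    p_xxy = np.bincount(states, minlength=8).reshape(2, 2, 2) / t_max
    p_xy = p_xxy.sum(axis=0, keepdims=True)         # p(x_{t+d}, y_t)
    p_xx = p_xxy.sum(axis=2, keepdims=True)         # p(x_{t+1+d}, x_{t+d})
    p_x = p_xxy.sum(axis=(0, 2), keepdims=True)     # p(x_{t+d})
    ratio = p_xxy * p_x / (p_xy * p_xx + eps)
    return float(np.sum(p_xxy * np.log2(ratio + eps)))

# Toy check: x copies y with a 3-bin lag, so TE_{y->x}(d) peaks at the
# delay implied by this indexing convention (d = 2 here).
rng = np.random.default_rng(0)
y = (rng.random(200_000) < 0.05).astype(int)
x = np.roll(y, 3)
print([round(delayed_te(x, y, d), 4) for d in range(5)])
```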
Figure 3. Raster plots and activity plots of the network for selected values of $J$ and $H$. For visibility, raster plots show spike times for a sample of only 2560 neurons, whereas the activity plots refer to all neurons in the network. Each column corresponds to a hierarchical level (from left to right: $H = 0$, $H = 7$, $H = 9$), and each row corresponds to a synaptic strength ((upper row) $J = 0.2$ mV; (bottom row) $J = 0.8$ mV). In the modular networks ($H = 7$ and $H = 9$), spikes of neurons in the same module are drawn in the same color, alternating between black and gray from one module to the next to ease visualization. Although modules in the network with $H = 9$ contain fewer neurons than modules in the network with $H = 7$, the same number of sampled neurons per module is displayed in both cases to allow comparison.
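For reference, panels like these are straightforward to reproduce from recorded spike times and neuron indices. The matplotlib sketch below (our own, hypothetical raster_and_activity helper) plots a raster of a neuron sample above the binned whole-population activity:

```python
import numpy as np
import matplotlib.pyplot as plt

def raster_and_activity(times, ids, n_sample=2560, bin_ms=1.0):
    """Raster of a neuron sample (top) and whole-population activity
    (bottom), in the style of Figure 3."""
    fig, (ax_r, ax_a) = plt.subplots(
        2, 1, sharex=True, gridspec_kw={"height_ratios": [3, 1]})
    keep = ids < n_sample                        # subsample for visibility
    ax_r.plot(times[keep], ids[keep], "|", ms=1, color="k")
    ax_r.set_ylabel("neuron")
    edges = np.arange(0.0, times.max() + bin_ms, bin_ms)
    counts, _ = np.histogram(times, bins=edges)  # all neurons, 1-ms bins
    ax_a.plot(edges[:-1], counts, color="k")
    ax_a.set_xlabel("time (ms)")
    ax_a.set_ylabel("activity")
    return fig

rng = np.random.default_rng(0)
demo_t = rng.uniform(0, 1000, 50_000)            # toy spike times (ms)
demo_i = rng.integers(0, 4096, 50_000)           # toy neuron ids
raster_and_activity(demo_t, demo_i)
plt.show()
```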
Figure 4. Increases of $J$ and $H$ amplify slow fluctuations and enhance information transfer. (a) Spike-train power spectra computed for $J = 0.2$ mV and different values of $H$ (indicated by different colors). (b) Same as (a), but for $J = 0.8$ mV. (c–e) Firing rate $\nu$, Fano factor $FF$, and correlation time $\tau_c$ for different values of $J$ ($H$ values indicated by the same colors as in (a,b)). (f) Average transfer entropy (computed as in Equation (11)) in a two-dimensional diagram where the abscissa represents the synaptic strength $J$ and the ordinate represents the hierarchical level $H$. Values of $TE$ are indicated by the color bar on the right.
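The statistics in panels (c–e) have standard estimators. The following sketch assumes the usual definitions: the Fano factor as the variance-to-mean ratio of spike counts in fixed windows, and the correlation time as the (rectangle-rule) integral of the squared, normalized spike-train autocovariance; the paper may adopt related but different definitions, and the helper names are ours.

```python
import numpy as np

def fano_factor(counts):
    """Fano factor: variance over mean of spike counts in fixed windows."""
    return counts.var() / counts.mean()

def correlation_time(train, dt=0.001, max_lag=500):
    """Correlation time as the integral of the squared, normalized
    autocovariance of a binned spike train (one common definition)."""
    x = train - train.mean()
    n = len(x)
    ac = np.array([np.dot(x[: n - l], x[l:]) / (n - l)
                   for l in range(max_lag)])
    ac /= ac[0]                        # normalize so that c(0) = 1
    return dt * np.sum(ac**2)          # rectangle-rule integral of c(tau)^2

rng = np.random.default_rng(0)
train = (rng.random(100_000) < 0.01).astype(float)  # ~Poisson, dt = 1 ms
counts = train.reshape(-1, 100).sum(axis=1)         # 100-ms count windows
print(fano_factor(counts), correlation_time(train)) # FF ~ 1, tau_c ~ dt
```

For a Poisson-like train the Fano factor sits near 1 and the correlation time near a single bin; the slow fluctuations reported in the figure correspond to both quantities growing well beyond these baselines.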
Figure 5. Spike-train autocorrelation $c_{xx}(\tau)$ and cross-correlation $c_{xy}(\tau)$ for selected parameter pairs ($J$,$H$). Left: $c_{xx}$. Right: $c_{xy}$. The selected pairs ($J$,$H$) correspond to all combinations taken from the sets $J = \{0.2, 0.4, 0.6, 0.8\}$ mV and $H = \{0, 2, 4, 6, 8\}$. For better visualization, the $c_{xx}$ and $c_{xy}$ curves for each pair ($J$,$H$) are plotted over the map of $TE$ in the $J$–$H$ diagram. Each $c_{xx}$ is computed from $K = 10{,}000$ randomly chosen neurons and each $c_{xy}$ from $K = 10{,}000$ randomly chosen pairs of neurons.
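The curves in Figure 5 can be estimated directly from binned spike trains: compute the covariance function of a train with itself ($c_{xx}$) or with another train ($c_{xy}$) and average over K random neurons or pairs. A minimal sketch, with our own helper names and restricted to non-negative lags for brevity:

```python
import numpy as np

def xcorr(a, b, max_lag):
    """Covariance function c_ab(tau) of two binned spike trains,
    for lags tau = 0 .. max_lag (non-negative lags only)."""
    a = a - a.mean()
    b = b - b.mean()
    n = len(a)
    return np.array([np.dot(a[: n - l], b[l:]) / (n - l)
                     for l in range(max_lag + 1)])

def average_corr(trains, k, max_lag, rng, cross=True):
    """Average c_xy over k random neuron pairs (cross=True) or c_xx
    over k random neurons (cross=False), as in Figure 5."""
    acc = np.zeros(max_lag + 1)
    for _ in range(k):
        i, j = rng.choice(len(trains), size=2, replace=False)
        if not cross:
            j = i                     # same train: autocovariance
        acc += xcorr(trains[i], trains[j], max_lag)
    return acc / k
```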
Figure 6. Relation between connectivity and slow fluctuations. (a) Connectivity inside a module ($\epsilon_{in}$) as $H$ increases (cf. Equations (14)–(18)). (b) Spike-train power spectra for a smaller network with $N = 2^{14}$ and $H = 0$ for different values of $\epsilon$.
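The spectra in panel (b) can be estimated by segment-averaged FFTs of binned spike trains (Bartlett's method). The sketch below assumes the common normalization in which a Poisson train has a flat spectrum at its firing rate; the helper name is ours.

```python
import numpy as np

def spike_train_spectrum(train, dt=0.001, n_win=1024):
    """Spike-train power spectrum by Bartlett averaging: split the
    binned train into windows, FFT each, and average |X(f)|^2."""
    n_seg = len(train) // n_win
    segs = train[: n_seg * n_win].reshape(n_seg, n_win)
    z = (segs - segs.mean(axis=1, keepdims=True)) / dt   # rate units (1/s)
    spec = (np.abs(np.fft.rfft(z, axis=1)) ** 2).mean(axis=0) * dt / n_win
    freqs = np.fft.rfftfreq(n_win, d=dt)
    return freqs, spec

# A ~10-Hz Poisson train gives an approximately flat spectrum at 10.
rng = np.random.default_rng(0)
train = (rng.random(2**17) < 10 * 0.001).astype(float)
f, s = spike_train_spectrum(train)
print(round(s[10:].mean(), 2))
```

Under this convention, the low-frequency growth of the spectra with $\epsilon$ (or with $H$ in Figure 4) directly quantifies the amplification of slow fluctuations.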
Figure 7. Transfer entropy and mutual information among modules. (a) Transfer entropy evaluated among modules, $TE(H)$, in the two-dimensional diagram where the ordinate represents the hierarchical level $H$ and the abscissa represents the synaptic strength $J$. Inset: boxplots of $TE(H)$ for fixed values of $H$. (b) Mutual information among modules, $MI(H)$, in the same $J$–$H$ diagram.
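The population-level mutual information of panel (b) can be illustrated with a plug-in estimator: discretize the activities of two modules and read MI off their joint histogram. The bin count and helper name below are assumptions of this sketch, not the authors' exact choices; the paper evaluates such quantities among modules, whereas the sketch shows a single pair.

```python
import numpy as np

def mutual_information(x, y, bins=16, eps=1e-12):
    """Plug-in MI estimate (bits) between two continuous signals,
    e.g., the binned population activities of two modules."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return float(np.sum(pxy * np.log2(pxy / (px * py + eps) + eps)))

# Toy example: two correlated module activities carry nonzero MI.
rng = np.random.default_rng(0)
rate_a = rng.poisson(10, 50_000).astype(float)
rate_b = rate_a + rng.normal(0, 2, 50_000)
print(round(mutual_information(rate_a, rate_b), 3))
```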
Table 1. Summary of the parameters used in this paper.

Neuron parameters:
  τ_m      20 ms      Membrane time constant
  v_th     20 mV      Firing threshold
  v_r      10 mV      Reset potential
  τ_R      0.5 ms     Refractory period
  RI_ext   30 mV      External input

Network connectivity parameters:
  N        2^17       Size of excitatory population
  ϵ        0.01       Connectivity
  R_ex     0.9        Excitatory rewiring probability
  R_in     1          Inhibitory rewiring probability

Synaptic parameters:
  J        [0, 1] mV  Excitatory synaptic strength
  g        5          Relative inhibitory synaptic strength
  τ_D      0.55 ms    Synaptic delay
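The article cites the Brian 2 simulator, and the parameters above translate almost line by line into a Brian 2 model definition. The sketch below wires a random ($H = 0$) network at a reduced size as a runnable illustration; it is not the authors' code (available at github.com/ViniciusLima94). The modular topologies would instead be connected from an explicit adjacency matrix, e.g., via syn.connect(i=pre, j=post).

```python
from brian2 import (NeuronGroup, Synapses, SpikeMonitor, run,
                    ms, mV, second)

# Parameters from Table 1 (network size reduced for a quick demo run).
N_e = 2**13; N_i = N_e // 4
tau_m = 20*ms; v_th = 20*mV; v_r = 10*mV; tau_ref = 0.5*ms
RI_ext = 30*mV                       # constant external drive
J = 0.4*mV; g = 5                    # one value from the J range [0, 1] mV
eps = 0.01; delay = 0.55*ms

eqs = "dv/dt = (RI_ext - v) / tau_m : volt (unless refractory)"

neurons = NeuronGroup(N_e + N_i, eqs, threshold="v > v_th",
                      reset="v = v_r", refractory=tau_ref, method="exact")
neurons.v = v_r
exc, inh = neurons[:N_e], neurons[N_e:]

# Random (H = 0) connectivity; each spike kicks v by +J or -g*J.
syn_e = Synapses(exc, neurons, on_pre="v += J", delay=delay)
syn_e.connect(p=eps)
syn_i = Synapses(inh, neurons, on_pre="v += -g*J", delay=delay)
syn_i.connect(p=eps)

spikes = SpikeMonitor(neurons)
run(1*second)
print(len(spikes.t), "spikes recorded")
```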
