Article

Grand Canonical Ensembles of Sparse Networks and Bayesian Inference

by Ginestra Bianconi 1,2
1 School of Mathematical Sciences, Queen Mary University of London, London E1 4NS, UK
2 The Alan Turing Institute, The British Library, London NW1 2DB, UK
Entropy 2022, 24(5), 633; https://doi.org/10.3390/e24050633
Submission received: 13 April 2022 / Revised: 25 April 2022 / Accepted: 27 April 2022 / Published: 30 April 2022
(This article belongs to the Topic Complex Systems and Network Science)

Abstract
Maximum entropy network ensembles have been very successful in modelling sparse network topologies and in solving challenging inference problems. However, the sparse maximum entropy network models proposed so far have a fixed number of nodes and are typically not exchangeable. Here we consider hierarchical models for exchangeable networks in the sparse limit, i.e., with the total number of links scaling linearly with the total number of nodes. The approach is grand canonical, i.e., the number of nodes of the network is not fixed a priori: it is finite but can be arbitrarily large. In this way the grand canonical network ensembles circumvent the difficulties in treating infinite sparse exchangeable networks, which according to the Aldous-Hoover theorem must vanish. The approach can treat networks with a given degree distribution or networks with a given distribution of latent variables. When only a subgraph induced by a subset of nodes is known, this model allows a Bayesian estimation of the network size and of the degree sequence (or the sequence of latent variables) of the entire network, which can be used for network reconstruction.

1. Introduction

Networks [1,2] have the ability to capture the topology of complex systems ranging from the brain to financial networks. Network models are key to obtaining reliable unbiased null models of the network and to explaining emergent phenomena of network evolution. Network models can be classified into two major classes: equilibrium maximum entropy models [3,4,5,6,7,8,9,10,11,12,13,14,15] and growing network models [1,16,17,18]. While growing network models have a number of nodes that increases in time, maximum entropy models have so far been used only for treating networks with a given number of nodes N. In this paper we are interested in extending the realm of maximum entropy network models to networks of varying network size N.
Maximum entropy network ensembles are the least biased ensembles satisfying a given set of constraints. As such, maximum entropy ensembles are widely used as null models and for network reconstruction starting from features associated with the nodes of the network. Given the profound relation between information theory and statistical mechanics [19,20], maximum entropy network ensembles can be distinguished into microcanonical ensembles and canonical ensembles [3,21,22], similarly to the analogous distinction traditionally introduced in statistical mechanics for ensembles of particles. Microcanonical network ensembles are ensembles of networks of N nodes satisfying some hard constraints (such as a given total number of links, or a given degree sequence). Canonical network ensembles are instead ensembles of networks of N nodes satisfying some soft constraints (such as a given expected total number of links or a given expected degree sequence). The canonical ensembles with given expected degree sequence can also be formulated as latent variable models where the latent variables are associated with the nodes [5,23].
Maximum entropy models have been very successful in solving challenging inference problems [6,8,24,25,26]; however, they have the limitation that they only treat networks with a given fixed number of nodes N. Indeed, in several scenarios the number of nodes might not be fixed or might not be known. In this context an important problem is to compare networks of different sizes. For instance, in brain imaging one might choose a finer or a coarser grid of brain regions; moreover, an outstanding problem in machine learning is how to build neural networks that generalize well when tested on network data with a different network size than the network data in the training set [27,28].
In order to have network ensembles that can treat networks of different sizes, here we introduce the grand canonical network ensembles, in which the number of nodes can vary. A well-defined grand-canonical network ensemble necessarily needs to be exchangeable [29], i.e., it needs to be invariant under permutation of the labels of the nodes of the network, so that removing or adding a node has an effect that is independent of the particular choice of the node added or removed.
The research on exchangeable networks is currently very vibrant. The graphon model [30] is the most well established exchangeable network model. However, this model is dense, i.e., the number of links scales quadratically with the number of nodes, while the vast majority of network data is sparse, with a total number of links scaling linearly with the network size. In other words, most real-world networks have constant average degree. However, popular models for sparse networks such as the configuration model [31] and exponential random graphs [4] are not exchangeable. In fact, these models treat networks of labelled nodes with a given degree sequence or with a given expected degree sequence. Therefore the network ensemble is not invariant under permutation of the node labels, except if all the degrees or all the expected degrees of the network are the same (for a more extended discussion of why these networks are not exchangeable see ref. [32]). Several works have proposed exchangeable network models in the case in which the average degree of the network diverges sublinearly with the network size [33,34,35,36,37,38]. Only recently, in ref. [32], a framework able to model sparse exchangeable networks in the limit of constant average degree has been proposed. The model is very general and has been extended to treat generalized network structures including multiplex networks [39] and simplicial complexes [40]. However, the model is well defined only for networks of large but finite number of nodes N, as exchangeable sparse networks need to obey the Aldous-Hoover theorem [41,42], according to which infinite sparse exchangeable networks must vanish. An alternative strategy for formulating exchangeable ensembles is to consider ensembles of unlabelled networks, for which several results are already available [43].
Here we build on the recently proposed exchangeable sparse network ensembles [32] to formulate hierarchical grand-canonical ensembles of sparse networks. The proposed grand-canonical ensembles are hierarchical models [25,44] with a variable number of nodes N and with given degree distribution or, alternatively, given latent variable distribution. The grand canonical approach provides a way to circumvent the limitations imposed by the Aldous-Hoover theorem because in this framework one considers a mixture of network ensembles with finite but unspecified and arbitrarily large network sizes. In this paper we define the grand-canonical ensembles and we characterize them with statistical mechanics methods, evaluating their entropy and the marginal probability of a link, and proposing generative algorithms to sample networks from these ensembles. [Note that the proposed grand canonical ensembles differ from the ensembles proposed in refs. [45,46]: in our case we consider networks with an undetermined number of nodes, while in refs. [45,46] it is the total sum of weights of weighted networks that is allowed to vary. From the statistical mechanics perspective our approach is fully classical, while in refs. [45,46] network ensembles are treated as quantum mechanical ensembles where the particles are associated to the links of the network and the adjacency matrix elements play the role of occupation numbers.]
Finally, we use the grand-canonical network ensembles to solve an inference problem. We consider a scenario in which the entire network has an unknown number of nodes, and we only have access to a subgraph induced by a subset of its nodes. In this setting we use the grand-canonical network models to perform a Bayesian estimation of the true parameters of the network model (given by the network size and the degree sequence or the sequence of latent variables). This a posteriori estimate of the parameters can then be used to reconstruct the unknown part of the network.

2. The Grand Canonical Network Ensemble with Given Degree Distribution

We consider the hierarchical grand canonical ensemble of exchangeable sparse simple networks in which we associate to every network $G=(V,E)$ with $N=|V|>N_0$ nodes the probability

$$P(G) = P(N)\,P(\mathbf{k}|N)\,P(G|\mathbf{k},N), \qquad (1)$$

where $P(N)$ indicates the probability that the network $G$ has $N$ nodes, $P(\mathbf{k}|N)$ indicates the conditional probability that the network has degree sequence $\mathbf{k}$ given that the network has $N$ nodes, and $P(G|\mathbf{k},N)$ indicates the probability of the network $G$ with adjacency matrix $\mathbf{a}$ given that the network has $N$ nodes and degree sequence $\mathbf{k}$ (see Figure 1 for a schematic representation of the model).
To be specific we consider the following model giving rise to the hierarchical grand canonical ensemble of exchangeable simple networks:
(1)
Drawing the total number of nodes N of the network. Let us discuss suitable choices for the distribution of the number of nodes $N$, with $N$ greater than or equal to some minimum number of nodes $N_0$. We indicate the distribution $P(N)$ as

$$P(N) = \pi(N), \quad \text{for } N\geq N_0. \qquad (2)$$

While a statistical mechanics approach would suggest taking a distribution $\pi(N)$ with a well defined mean value, such as the exponential distribution

$$\pi(N) = C\, e^{-\mu N} \quad \text{for } N\geq N_0, \qquad (3)$$

where $C$ is a normalization constant and $\mu>0$, in the context of network science it might actually be relevant to also consider broad distributions $\pi(N)$, such as power-law distributions

$$\pi(N) = D\, N^{-\nu} \quad \text{for } N\geq N_0, \qquad (4)$$

where $D$ is a normalization constant and $\nu>1$.
(2)
Drawing the degree sequence of the network. In order to obtain a sparse exchangeable network ensemble with given degree distribution $p(k)$ having finite average degree $\langle k\rangle$, minimum allowed degree $\hat m$ and maximum allowed degree $K$, we consider the following expression for the probability of a given degree sequence given the total number of nodes,

$$P(\mathbf{k}|N) = \prod_{i=1}^{N} p(k_i)\,\hat\theta(K-k_i)\,\hat\theta(k_i-\hat m)\;\delta\!\left(\sum_{i=1}^{N}k_i,\ \langle k\rangle N\right), \qquad (5)$$

where $\hat\theta(x)$ indicates the Heaviside function, with $\hat\theta(x)=1$ if $x\geq 0$ and $\hat\theta(x)=0$ otherwise, and where we used the notation $\langle k\rangle = \sum_k k\, p(k)$. In the following we will indicate with $L$ the total number of links of the network, given by $L=\langle k\rangle N/2$. Note that $P(\mathbf{k}|N)$ is independent of the labels of the nodes, i.e., all the degree sequences that can be obtained by a permutation of the node labels of a given degree sequence have the same probability $P(\mathbf{k}|N)$.
(3)
Drawing the adjacency matrix of the network. The probability of a network $G$ with adjacency matrix $\mathbf{a}$ given the total number of nodes $N$ of the network and the degree sequence $\mathbf{k}$ is chosen in the least biased way by drawing the network from a uniform distribution, i.e., the conditional probability $P(G|\mathbf{k},N)$ is equivalent to the probability of a network in the microcanonical ensemble. Therefore, by indicating with $\mathcal{N}(\mathbf{k}|N)$ the total number of networks with $N$ nodes and degree sequence $\mathbf{k}$, and with $\Sigma_N(\mathbf{k})=\ln\mathcal{N}(\mathbf{k}|N)$ the entropy of the ensemble, we can express $P(G|\mathbf{k},N)$ as

$$P(G|\mathbf{k},N) = \frac{1}{\mathcal{N}(\mathbf{k}|N)} = e^{-\Sigma_N(\mathbf{k})}. \qquad (6)$$

Note that for sparse networks of $N\geq N_0$ nodes the entropy $\Sigma_N(\mathbf{k})$ obeys the Bender-Canfield formula as long as the network has a structural cutoff $K_S$, i.e., as long as $k_i\leq K_S=\sqrt{\langle k\rangle N_0}$ [3,21,22,47],

$$\Sigma_N(\mathbf{k}) = \ln\frac{(2L)!!}{\prod_{i=1}^{N}k_i!} + o(N), \qquad (7)$$

where in Equation (7) we indicate with $\mathbf{k}=\{k_1,k_2,\ldots,k_N\}$ the degree sequence, with $k_i$, the degree of node $i$, given by $k_i=\sum_{j=1}^{N}a_{ij}$.
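The first hierarchical level and the Bender-Canfield entropy are straightforward to evaluate numerically. The following minimal sketch (Python) draws the network size from a truncated exponential prior and computes $\Sigma_N(\mathbf{k})$ in log space to avoid overflowing factorials; the prior, its parameters, and the illustrative degree sequence are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

# Minimal sketch of the first hierarchical level and of the Bender-Canfield
# entropy of Equation (7). All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)

def sample_N(N0=500, Nbar=1000):
    """Draw N from pi(N) ~ exp(-N/Nbar) restricted to N >= N0 (Equation (3))."""
    return N0 + int(rng.exponential(scale=Nbar))

def log_double_factorial_even(n):
    """ln n!! for even n, using n!! = 2**(n/2) * (n/2)!."""
    h = n // 2
    return h * np.log(2.0) + gammaln(h + 1)

def bender_canfield_entropy(k):
    """Sigma_N(k) = ln[(2L)!! / prod_i k_i!] for a degree sequence k."""
    two_L = int(np.sum(k))                     # 2L equals the sum of degrees
    return log_double_factorial_even(two_L) - np.sum(gammaln(np.asarray(k) + 1))

k = rng.poisson(5, size=sample_N()) + 1        # an illustrative degree sequence
if k.sum() % 2:                                # make the total degree even
    k[0] += 1
print(bender_canfield_entropy(k))
```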
It follows that the hierarchical grand canonical ensemble of exchangeable sparse networks can be cast into a Hamiltonian ensemble with probability $P(G)$ given by

$$P(G) = \frac{1}{Z}\, e^{-H(G)}\;\delta\!\left(\langle k\rangle N/2,\ \sum_{i<j}a_{ij}\right)\hat\theta\!\left(K-\max_{i}k_i\right)\hat\theta\!\left(\min_{i}k_i-\hat m\right), \qquad (8)$$

with Hamiltonian $H(G)$ given by

$$H(G) = -\ln\pi(N) - \sum_{i=1}^{N}\ln\!\left[p(k_i)\,k_i!\right]\delta\!\left(k_i,\ \sum_{j=1}^{N}a_{ij}\right) + \ln\!\left[(\langle k\rangle N)!!\right]. \qquad (9)$$

This Hamiltonian is global and is invariant under permutation of the node labels; therefore this hierarchical grand canonical ensemble is exchangeable. Indeed we have that the probability of a network $P(G)$ given by Equation (8) obeys

$$P(G) = P(\tilde G), \qquad (10)$$

where $\tilde G$ is any network obtained from network $G$ under a generic permutation $\sigma$ of the labels of the nodes. Moreover we note that for $\pi(N)=\delta(N,\bar N)$, i.e., when the network size is fixed, this model reduces to the exchangeable model for sparse network ensembles proposed in ref. [32].

3. The Grand Canonical Network Ensemble with Given Distribution of the Latent Variables

The grand canonical formalism can also be easily extended to treat network models with latent variables $\boldsymbol\theta$ associated to the nodes of the network $G=(V,E)$. Note that here and in the following we assume that the latent variables take discrete values. To this end we can consider the soft grand canonical hierarchical model associating to each network with $N=|V|>N_0$ nodes, latent variables $\boldsymbol\theta$ and adjacency matrix $\mathbf{a}$ the probability

$$P(G,\boldsymbol\theta,N) = P(N)\,P(\boldsymbol\theta|N)\,P(G|\boldsymbol\theta,N), \qquad (11)$$
with
$$P(N) = \pi(N), \qquad (12)$$

where $\pi(N)$ is an arbitrary prior on the number of nodes in the network defined for $N\geq N_0$. Typical examples of the distribution $\pi(N)$ are given by Equations (3) and (4). The probability of the latent variables is chosen to be exchangeable and given by

$$P(\boldsymbol\theta|N) = \prod_{i=1}^{N} p(\theta_i), \qquad (13)$$

where $p(\theta_i)$ is the probability distribution of each latent variable. The distribution $p(\theta)$ can be chosen arbitrarily, as long as the expectation of $\theta$ is finite. The probability of the network given the network size and the latent variables is obtained by drawing a Bernoulli variable for each link, with the probability of observing a link between node $i$ and node $j$ conditioned on the value of their latent variables given by $p_N(\theta_i,\theta_j)$, i.e.,

$$P(G|\boldsymbol\theta,N) = \prod_{i<j} p_N(\theta_i,\theta_j)^{a_{ij}}\left(1-p_N(\theta_i,\theta_j)\right)^{1-a_{ij}}. \qquad (14)$$
To be concrete we consider the following expression for the probability $p_N(\theta_i,\theta_j)$, which is the general expression for the marginal probability of a link in canonical network ensembles (or equivalently exponential random graph models),

$$p_N(\theta_i,\theta_j) = \frac{\theta_i\theta_j/N}{1+\theta_i\theta_j/N}. \qquad (15)$$

The advantage of taking this expression for the probability $p_N(\theta_i,\theta_j)$ is that $p_N(\theta_i,\theta_j)$ is always smaller than or equal to one for every value of the latent variables. Therefore in this model we do not need to impose a structural cutoff on the latent variables. In summary, the grand canonical network ensemble with given latent variable distribution is a hierarchical network model in which, given the network size and the latent variables, the network is drawn according to a canonical ensemble of networks. In this ensemble the probability of a network $G$ can be written in Hamiltonian form as
$$P(G) = \frac{1}{Z}\, e^{-H(G)}, \qquad (16)$$

with Hamiltonian $H(G)$ given by

$$H(G) = -\ln\pi(N) - \sum_{i=1}^{N}\ln p(\theta_i) - \sum_{i<j}\left\{a_{ij}\ln p_N(\theta_i,\theta_j) + (1-a_{ij})\ln\!\left[1-p_N(\theta_i,\theta_j)\right]\right\}. \qquad (17)$$

This Hamiltonian is invariant under permutation of the node labels; therefore this model is exchangeable.
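As a concrete illustration, the Hamiltonian of Equation (17) can be evaluated directly from the adjacency matrix and the latent variables. The sketch below (Python) does this; the priors, which enter only through their logarithms (up to additive constants), and all names are illustrative assumptions, not part of the paper.

```python
import numpy as np

def hamiltonian_latent(a, theta, log_pi_N, log_p_theta):
    """Hamiltonian H(G) of Equation (17) for the latent-variable ensemble.

    a           : (N, N) symmetric 0/1 adjacency matrix
    theta       : (N,) latent variables of the nodes
    log_pi_N    : callable, log of the prior pi(N) on the network size
    log_p_theta : callable, log of the prior p(theta) of one latent variable
    """
    N = len(theta)
    x = np.outer(theta, theta) / N          # theta_i theta_j / N
    p = x / (1.0 + x)                       # link probabilities, Equation (15)
    iu = np.triu_indices(N, k=1)            # pairs with i < j
    log_like = np.sum(a[iu] * np.log(p[iu])
                      + (1 - a[iu]) * np.log(1.0 - p[iu]))
    return -log_pi_N(N) - np.sum(log_p_theta(theta)) - log_like

# Illustrative unnormalized priors: exponential pi(N) and exponential p(theta)
log_pi = lambda N: -N / 1000.0
log_pt = lambda th: -th / 5.0
rng = np.random.default_rng(0)
th = rng.exponential(5.0, size=50)
a = (rng.random((50, 50)) < 0.05).astype(int)
a = np.triu(a, 1); a = a + a.T
print(hamiltonian_latent(a, th, log_pi, log_pt))
```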

4. The Entropy of Grand Canonical Ensembles

In this section we show that the entropy $S$ [3,48] of the two proposed grand canonical network ensembles, defined as

$$S = -\sum_{G} P(G)\ln P(G), \qquad (18)$$

can be decomposed into contributions that reflect the uncertainty related to an increasing number of hierarchical levels of the model. In order to show this result we discuss separately the entropy of the two proposed grand canonical ensembles.

4.1. Entropy of the Grand Canonical Ensemble with Given Degree Distribution

The entropy $S$ of the ensemble fixing the degree distribution can be decomposed into the entropy of the model at the different levels of the hierarchy according to the following expression,

$$S = S_{\pi(N)} + \left\langle S_{p(k)}\right\rangle_{\pi(N)} + \left\langle \Sigma_N(\mathbf{k})\right\rangle_{\pi(N),p(k)}, \qquad (19)$$

where $S_{\pi(N)}$ is the entropy associated to the number of typical choices of the total number of nodes $N$, $\langle S_{p(k)}\rangle_{\pi(N)}$ is the entropy associated to the choice of the degree sequence averaged over the distribution $\pi(N)$, and $\langle\Sigma_N(\mathbf{k})\rangle_{\pi(N),p(k)}$ is the Gibbs entropy [3] of the networks with given degree sequence averaged over the distribution $\pi(N)$ and $P(\mathbf{k}|N)$. In other words we have

$$S_{\pi(N)} = -\sum_{N>N_0}\pi(N)\ln\pi(N),$$
$$\left\langle S_{p(k)}\right\rangle_{\pi(N)} = -\sum_{N>N_0}\pi(N)\,N\sum_{k}p(k)\ln p(k),$$
$$\left\langle\Sigma_N(\mathbf{k})\right\rangle_{\pi(N),p(k)} = \sum_{N>N_0}\pi(N)\sum_{\mathbf{k}}P(\mathbf{k}|N)\,\Sigma_N(\mathbf{k}). \qquad (20)$$

4.2. Entropy of the Grand Canonical Ensemble with Given Latent Variable Distribution

Similarly to the previous case, it is easy to show that the entropy of the ensemble fixing the distribution of the latent variables can be decomposed into the entropy of the model at the different levels of the hierarchy, according to the following expression,

$$S = S_{\pi(N)} + \left\langle S_{p(\theta)}\right\rangle_{\pi(N)} + \left\langle S_N(\boldsymbol\theta)\right\rangle_{\pi(N),p(\theta)}, \qquad (21)$$

where $S_{\pi(N)}$ is the entropy associated to the number of typical choices of the total number of nodes $N$, $\langle S_{p(\theta)}\rangle_{\pi(N)}$ is the entropy associated to the choice of the sequence of latent variables averaged over the distribution $\pi(N)$, and $\langle S_N(\boldsymbol\theta)\rangle_{\pi(N),p(\theta)}$ is the Shannon entropy [3] of the networks with given sequence of latent variables averaged over the distribution $\pi(N)$ and $P(\boldsymbol\theta|N)$. In other words we have

$$S_{\pi(N)} = -\sum_{N>N_0}\pi(N)\ln\pi(N),$$
$$\left\langle S_{p(\theta)}\right\rangle_{\pi(N)} = -\sum_{N>N_0}\pi(N)\,N\sum_{\theta}p(\theta)\ln p(\theta),$$
$$\left\langle S_N(\boldsymbol\theta)\right\rangle_{\pi(N),p(\theta)} = \sum_{N>N_0}\pi(N)\sum_{\boldsymbol\theta}P(\boldsymbol\theta|N)\,S_N(\boldsymbol\theta), \qquad (22)$$

where the Shannon entropy $S_N(\boldsymbol\theta)$ of the network given the sequence of latent variables and the network size $N$ can be expressed as

$$S_N(\boldsymbol\theta) = -\sum_{i<j}\left[p_N(\theta_i,\theta_j)\ln p_N(\theta_i,\theta_j) + \left(1-p_N(\theta_i,\theta_j)\right)\ln\left(1-p_N(\theta_i,\theta_j)\right)\right]. \qquad (23)$$
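The last term, $S_N(\boldsymbol\theta)$, is simple to evaluate numerically for a given sequence of latent variables. A minimal sketch (Python; the exponential latent-variable distribution and all parameter values are illustrative assumptions):

```python
import numpy as np

def shannon_entropy_latent(theta):
    """Shannon entropy S_N(theta) of Equation (23) given the latent variables."""
    N = len(theta)
    x = np.outer(theta, theta) / N
    p = x / (1.0 + x)                       # link probabilities, Equation (15)
    iu = np.triu_indices(N, k=1)
    p = np.clip(p[iu], 1e-15, 1 - 1e-15)    # guard the logarithms
    return -np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))

# Example with exponentially distributed latent variables (illustrative)
rng = np.random.default_rng(0)
theta = rng.exponential(scale=5.0, size=1000)
print(shannon_entropy_latent(theta))
```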

5. Marginal Probability of a Link

5.1. The Case of the Grand Canonical Ensemble with Given Degree Distribution

The grand canonical ensemble of exchangeable sparse networks is an ensemble in which the total number of nodes is not specified. If we consider the networks of this ensemble having a given number of nodes $N$, the model reduces to the exchangeable sparse network ensemble proposed in ref. [32], whose marginal probability of a link $(i,j)$ is given by

$$\tilde p_{ij} = \sum_{k}\sum_{k'} p(k)\,p(k')\,\frac{k k'}{\langle k\rangle N}. \qquad (24)$$
Since the grand-canonical ensemble of sparse exchangeable networks with given degree distribution can be interpreted as a mixture of the exchangeable sparse models proposed in ref. [32] with different size N, it is immediate to show that the marginal probability of a link between node i and node j in the grand canonical ensembles is given by the exchangeable expression,
$$p_{ij} = \sum_{N>N_0}\pi(N)\sum_{k,k'}p(k)\,p(k')\,\frac{k k'}{\langle k\rangle N} = \sum_{N>N_0}\pi(N)\,\frac{\langle k\rangle}{N}. \qquad (25)$$
Moreover the probability that two nodes are connected, given that they have degrees $k$ and $k'$, is given by

$$p_{ij|k_i=k,\,k_j=k'} = p(k,k') = \frac{k k'}{\langle k\rangle}\sum_{N>N_0}\frac{\pi(N)}{N}. \qquad (26)$$
Finally, the probability that two nodes are connected, given that they have degrees $k$ and $k'$ and that the actual size of the network is $N$, is given by the uncorrelated network expression

$$p_{ij|k_i=k,\,k_j=k',\,N} = p_N(k,k') = \frac{k k'}{\langle k\rangle N}. \qquad (27)$$
From these expressions of the marginal probability of a link it is possible to appreciate how the hierarchical grand canonical ensemble of sparse exchangeable networks circumvents the difficulties arising from the Aldous-Hoover theorem without violating it. Indeed the marginal probability $p_N(k,k')$ of a link conditioned on the degrees of the two linked nodes and on the number of nodes $N$ of the network vanishes in the limit $N\to\infty$; however, if the number of nodes of the network is arbitrarily large but unknown, the marginal probability of the link remains finite (as both $p_{ij}$ and $p(k,k')$ are finite).
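These marginals are easy to evaluate once a concrete prior $\pi(N)$ is chosen. A minimal sketch (Python; the truncated exponential prior and every numerical value are illustrative assumptions):

```python
import numpy as np

# Marginal link probabilities of Equations (25)-(27), assuming an exponential
# prior pi(N) ~ exp(-N/Nbar) truncated at N0 (illustrative choices).
N0, Nbar, mean_k = 100, 200, 7.0
N = np.arange(N0 + 1, 20 * Nbar)
pi = np.exp(-N / Nbar)
pi /= pi.sum()

p_ij = np.sum(pi * mean_k / N)          # Equation (25): unconditional marginal
pi_over_N = np.sum(pi / N)

def p_link_given_degrees(k, kprime):
    """Equation (26): link probability given the degrees, N unknown."""
    return k * kprime / mean_k * pi_over_N

def p_link_given_degrees_N(k, kprime, N):
    """Equation (27): link probability given the degrees and the size N."""
    return k * kprime / (mean_k * N)

print(p_ij, p_link_given_degrees(3, 5), p_link_given_degrees_N(3, 5, 500))
```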

5.2. The Case of the Grand Canonical Ensemble with Given Latent Variable Distribution

For the grand canonical ensemble with given latent variable distribution $p(\theta)$ we have that the marginal probability of a link is given by

$$p_{ij} = \sum_{N>N_0}\pi(N)\sum_{\theta,\theta'}p(\theta)\,p(\theta')\,p_N(\theta,\theta'). \qquad (28)$$

The probability of the link given the latent variables of the nodes is given by

$$p(\theta,\theta') = \theta\theta'\sum_{N>N_0}\pi(N)\,\frac{1}{N+\theta\theta'}; \qquad (29)$$

the probability of a link given the network size and the latent variables is given by

$$p_N(\theta,\theta') = \frac{\theta\theta'/N}{1+\theta\theta'/N}. \qquad (30)$$
As we discussed in the case of the grand canonical ensemble with given degree distribution, also for the grand canonical ensemble with given latent variable distribution the grand canonical approach allows one to circumvent the Aldous-Hoover theorem without violating it, as the marginal probability of a link in an arbitrarily large network of unknown size is finite.

6. Generating Single Instances of Grand-Canonical Network Ensembles

In this section we describe two algorithms to generate single instances of the proposed grand canonical ensembles. In particular we will discuss a Metropolis-Hastings algorithm to generate single instances of networks drawn from the grand canonical ensemble with given degree distribution, and a Monte Carlo algorithm to generate single instances of networks drawn from the grand canonical ensemble with given distribution of latent variables.

6.1. Metropolis-Hastings Algorithm for the Grand-Canonical Ensemble with Given Degree Distribution

The grand-canonical exchangeable ensemble of sparse networks can be obtained by implementing a Metropolis-Hastings algorithm using the network Hamiltonian given by Equation (9).
(1)
Start with a network of $N$ nodes having exactly $L=\langle k\rangle N/2$ links and in which the minimum degree is greater than or equal to $\hat m$ and the maximum degree is smaller than or equal to $K$.
(2)
Perform the Metropolis-Hastings algorithm for exchangeable sparse networks with N nodes (defined below);
(3)
Propose to change the number of nodes to $N'=N+1$ (addition of one node) or $N'=N-1$ (removal of one node) with equal probability, and accept the move with probability $\min\left\{1,\ \pi(N')/\pi(N)\right\}$ as long as $N'>N_0$. If the move is accepted, change the number of nodes by adding or removing a node, set the number of links to $L=\langle k\rangle N'/2$, and ensure that each node has degree at least $\hat m$ and at most $K$. In particular, if a node is added, ensure it has at least $\hat m$ links by randomly rewiring the existing links of the network and adding links so that the total number of links is the integer that best approximates $\langle k\rangle N'/2$. Instead, if a node needs to be removed, choose a random node of the network, remove it, and rewire/remove links in order to enforce that the total number of links is the integer that best approximates $\langle k\rangle N'/2$.
The Metropolis-Hastings algorithm for the exchangeable sparse networks with N nodes is the same algorithm used in Ref. [32] for exchangeable networks with finite size N and is indicated below.
(1)
Start with a network of $N$ nodes having exactly $L=\langle k\rangle N/2$ links and in which the minimum degree is greater than or equal to $\hat m$ and the maximum degree is smaller than or equal to $K$.
(2)
Iterate the following steps until equilibration:
(i)
Let $\mathbf{a}$ be the adjacency matrix of the network;
(ii)
Choose a random link $\ell=(i,j)$ between nodes $i$ and $j$, and choose a random pair of nodes $(i',j')$ not connected by a link;
(iii)
Let $\mathbf{a}'$ be the adjacency matrix of the network in which the link $(i,j)$ is removed and the link $(i',j')$ is inserted instead. Draw a random number $r$ from a uniform distribution in $[0,1]$, i.e., $r\sim U(0,1)$. If $r<\min\left(1,\ e^{-\Delta H}\right)$, where $\Delta H=H(\mathbf{a}')-H(\mathbf{a})$, and if the move does not violate the conditions on the minimum and maximum degree of the network, replace $\mathbf{a}$ by $\mathbf{a}'$.
The Metropolis-Hastings algorithm can be used to sample the space of networks with variable number of nodes and given (stable) degree distribution (see Figure 2).
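To make the inner rewiring step concrete, the following minimal sketch (Python) implements one link-swap move at fixed $N$. It assumes, as an illustration, that only the terms $\ln[p(k_i)k_i!]$ of the Hamiltonian of Equation (9) for the affected nodes enter $\Delta H$ (the $\pi(N)$ and $(\langle k\rangle N)!!$ terms cancel at fixed $N$ and $L$); all function names and default parameters are illustrative.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)

def log_weight(k, log_p_of_k):
    """ln[p(k) k!] for a single node of degree k."""
    return log_p_of_k(k) + lgamma(k + 1)

def mh_rewire_step(a, deg, log_p_of_k, m_hat=1, K=16):
    N = len(deg)
    links = np.argwhere(np.triu(a, 1))
    i, j = links[rng.integers(len(links))]     # a random existing link
    while True:                                # a random non-adjacent pair
        i2, j2 = rng.choice(N, size=2, replace=False)
        if a[i2, j2] == 0:
            break
    new = deg.copy()
    new[[i, j]] -= 1
    new[[i2, j2]] += 1
    if new.min() < m_hat or new.max() > K:     # degree bounds of the ensemble
        return
    # minus Delta H, restricted to the nodes whose degree actually changes
    mdh = sum(log_weight(new[n], log_p_of_k) - log_weight(deg[n], log_p_of_k)
              for n in {int(i), int(j), int(i2), int(j2)})
    if rng.random() < min(1.0, np.exp(mdh)):   # Metropolis acceptance
        a[i, j] = a[j, i] = 0
        a[i2, j2] = a[j2, i2] = 1
        deg[:] = new
```

An unnormalized log prior such as `lambda k: -k / 5.0` suffices for `log_p_of_k`, since the normalization constant of $p(k)$ cancels in $\Delta H$.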

6.2. Monte Carlo Generation of Grand Canonical Network Ensemble with Given Latent Variable Distribution

A single instance of the grand canonical model with given latent variable distribution can be obtained by performing the following algorithm (a minimal code sketch follows the list):
(1)
Draw the network size $N$ from the $\pi(N)$ distribution;
(2)
Draw the latent variable $\theta_i$ of each node $i$ independently from the latent variable distribution $p(\theta)$;
(3)
Draw each link $(i,j)$ of the network with probability $p_N(\theta_i,\theta_j)$.
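A minimal sketch of these three steps (Python; the exponential prior $\pi(N)$, the exponential latent-variable distribution, and all numerical values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_network(N0=500, Nbar=1000, theta_scale=5.0):
    # (1) draw the network size N from pi(N) ~ exp(-N/Nbar), N >= N0
    N = N0 + int(rng.exponential(scale=Nbar))
    # (2) draw a latent variable for each node from p(theta)
    theta = rng.exponential(scale=theta_scale, size=N)
    # (3) draw each link (i, j) with probability p_N(theta_i, theta_j), Eq. (15)
    x = np.outer(theta, theta) / N
    p = x / (1.0 + x)
    u = rng.random((N, N))
    a = np.triu((u < p).astype(int), k=1)
    return N, theta, a + a.T                # symmetric adjacency matrix

N, theta, a = sample_network()
print(N, a.sum() / N)                       # network size and average degree
```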

7. Bayesian Estimation of the Network Parameters Given Partial Knowledge of the Network

In this section we will use the grand canonical network ensembles to calculate the posterior distribution of the network parameters given partial information on a network $G=(V,E)$. In particular, let us assume that we only know the subgraph $\hat G=(\hat V,\hat E)$ induced by a set $\hat V\subseteq V$ of $\hat N=|\hat V|$ nodes, with adjacency matrix $\hat{\mathbf{a}}$, and that we do not have access to the full network $G$ with adjacency matrix $\mathbf{a}$. Without loss of generality, let us label the nodes of the network in such a way that the labels $i$ with $1\leq i\leq\hat N$ indicate the nodes in $\hat V$ (denoted as sampled nodes) and the labels $i$ with $i>\hat N$ indicate the nodes in $V\setminus\hat V$ (denoted also as unsampled or unknown nodes). We indicate with $\boldsymbol\kappa$ the degree sequence of the sampled network $\hat G$. Our goal is to make a Bayesian estimation of the network size $N$ and of the true network parameters given the observed subgraph $\hat G$. These a posteriori estimates of the true parameters of the network can then be used to reconstruct the unknown part of the network $G$.

7.1. Inferring the True Parameters with the Grand Canonical Ensemble with Given Degree Distribution

In this section we will use the grand canonical ensemble with given degree distribution to find the posterior probability distribution of the network parameters. For convenience we will indicate with $k_i$ the true degree of the sampled nodes $1\leq i\leq\hat N$ and with $q_i$ the true degree of the remaining $N-\hat N$ unsampled nodes $\hat N+1\leq i\leq N$. To this end, using Bayes' rule we get the following expression for the posterior distribution of the network parameters given the observed subgraph $\hat G$,

$$P(N,\mathbf{k},\mathbf{q}|\hat G) = \frac{P(N)\,P(\mathbf{k},\mathbf{q}|N)\,P(\hat G|\mathbf{k},\mathbf{q},N)}{P(\hat G)}, \qquad (31)$$
where

$$P(N) = \pi(N), \qquad P(\mathbf{k},\mathbf{q}|N) = \prod_{i=1}^{\hat N}p(k_i)\prod_{i=\hat N+1}^{N}p(q_i), \qquad P(\hat G|\mathbf{k},\mathbf{q},N) = e^{-\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)}, \qquad (32)$$

with $\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ given by

$$\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa) = \Sigma_N(\mathbf{k},\mathbf{q}) - \hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa). \qquad (33)$$

Here $\Sigma_N(\mathbf{k},\mathbf{q})$ indicates the entropy of the network of size $N$ with degree sequence $[\mathbf{k},\mathbf{q}]$, whose expression is given by the Bender-Canfield formula [3,21,22,47] (Equation (7)), which reads in this case

$$\Sigma_N(\mathbf{k},\mathbf{q}) = \ln\left[(2L)!!\left(\prod_{i=1}^{\hat N}k_i!\prod_{i=\hat N+1}^{N}q_i!\right)^{-1}\right]. \qquad (34)$$
Moreover, $\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ indicates the logarithm of the number of networks of $N$ nodes having $\hat G$ (with adjacency matrix $\hat{\mathbf{a}}$ and degree sequence $\boldsymbol\kappa$) as the induced subgraph between the $\hat N$ sampled nodes.
Moreover, in Equation (31) $P(\hat G)$ indicates the evidence of the data, given by

$$P(\hat G) = \sum_{N}\sum_{\mathbf{k},\mathbf{q}}\pi(N)\prod_{i=1}^{\hat N}p(k_i)\prod_{i=\hat N+1}^{N}p(q_i)\,e^{-\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)}. \qquad (35)$$
Calculating the entropy $\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ using statistical mechanics methods, including the use of a functional order parameter (see Appendix A), we derive the following expression:

$$\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa) = \ln\left[M!\,(Q-M)!!\,\binom{Q}{M}\left(\prod_{i=1}^{\hat N}(k_i-\kappa_i)!\prod_{i=\hat N+1}^{N}q_i!\right)^{-1}\right]\delta\!\left(Q+M,\ 2L-2\hat L\right), \qquad (36)$$
where $M$ indicates the number of links between the sampled nodes and the unsampled nodes, and $Q$ indicates the sum of all the degrees of the unsampled nodes, i.e.,

$$M = \sum_{i=1}^{\hat N}(k_i-\kappa_i), \qquad Q = \sum_{i=\hat N+1}^{N}q_i, \qquad (37)$$

where $M$ and $Q$ need to satisfy the constraint enforcing that the total number of true links is given by $L=\langle k\rangle N/2$. Therefore, indicating with $\hat L=\sum_{i=1}^{\hat N}\kappa_i/2$ the number of sampled links, we must impose

$$Q+M = 2L-2\hat L. \qquad (38)$$
The expression obtained for the entropy $\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ implies that the asymptotic expression for the number of networks with $N$ nodes and degree sequence $[\mathbf{k},\mathbf{q}]$ having $\hat G$ as a subgraph is given by (see Appendix A for the derivation)

$$\mathcal{N}(\mathbf{k},\mathbf{q}|\boldsymbol\kappa,N) = e^{\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)} = M!\,(Q-M)!!\,\binom{Q}{M}\left(\prod_{i=1}^{\hat N}(k_i-\kappa_i)!\prod_{i=\hat N+1}^{N}q_i!\right)^{-1}\delta\!\left(Q+M,\ 2L-2\hat L\right). \qquad (39)$$
This expression admits a simple combinatorial interpretation. In fact, the networks with degree sequence $[\mathbf{k},\mathbf{q}]$ having $\hat G$ as a subgraph can be constructed by adding (unsampled) links to the graph $\hat G$. The unsampled part of the network can be constructed by assigning to each node $i$ with $1\leq i\leq\hat N$ a number of stubs given by $k_i-\kappa_i$, and to each node $i$ with $i>\hat N$ a number of stubs given by $q_i$. The unsampled networks can then be obtained by matching the stubs pairwise, with the constraint that the stubs of the first $\hat N$ nodes can only be matched with the stubs of the unsampled nodes $i>\hat N$. Therefore the reconstructed part of the network is formed by a bipartite network between the sampled and the unsampled nodes, with a number of links given by $M$, and a simple network among the unsampled nodes, with a number of links given by $(Q-M)/2$. The number of matchings of the $M$ links of the bipartite network is given by $M!$; the number of matchings of the stubs of the simple network among the unsampled nodes is $(Q-M)!!$. In order to get the number of distinct networks $G$ with degree sequence $[\mathbf{k},\mathbf{q}]$ having $\hat G$ as a subgraph, we need to divide by the number of permutations of the stubs belonging to the same nodes, and we need to multiply by $\binom{Q}{M}$, indicating the number of ways in which we can choose the $M$ stubs of the unsampled nodes to be matched with the stubs of the sampled nodes.
Given the expression for $\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ provided by Equation (36), we can deduce the explicit expression for $\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$:

$$\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa) = -\ln\left[\prod_{i=1}^{\hat N}\frac{k_i!}{(k_i-\kappa_i)!}\;\frac{M!\,(Q-M)!!}{(\langle k\rangle N)!!}\,\binom{Q}{M}\right]\delta\!\left(Q+M,\ 2L-2\hat L\right). \qquad (40)$$
It follows that the described Bayesian inference assigns to the model parameters the probability

$$P(N,\mathbf{k},\mathbf{q}|\hat G) \propto \pi(N)\prod_{i=1}^{\hat N}p(k_i)\prod_{i=\hat N+1}^{N}p(q_i)\,e^{-\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)}, \qquad (41)$$
with $\Delta\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ given by Equation (40). From this expression, imposing with a delta function that $M=\sum_{i=1}^{\hat N}(k_i-\kappa_i)$, expressing the delta in integral form and using the saddle point method to evaluate the integral, we can calculate the marginal probability $P(k_i|\hat G,\omega)$ that a sampled node $i$ with $1\leq i\leq\hat N$ has true degree $k_i\geq\kappa_i$ given $M$ and $Q$, i.e.,

$$P(k_i|\hat G,\omega) \propto p(k_i)\,\frac{k_i!}{(k_i-\kappa_i)!}\,e^{-\omega k_i}\,\hat\theta(k_i-\kappa_i), \qquad (42)$$
where $\omega$ is related to $M$ by

$$M = \sum_{i=1}^{\hat N}\frac{\sum_{k}p(k)\,k\,\frac{k!}{(k-\kappa_i)!}\,e^{-\omega k}}{\sum_{k}p(k)\,\frac{k!}{(k-\kappa_i)!}\,e^{-\omega k}}. \qquad (43)$$
In Figure 3 we show the difference between an exponential prior distribution $p(k)$ on the degree of the nodes and the posterior marginal probability $P(k|\hat G,\omega)$ of the true degree of the sampled nodes, plotted for different values of the sampled degree $\kappa$ of the same node. Finally, we can calculate the a posteriori probability $P(N|\hat G,M)$ that the real network has $N$ nodes, conditioned on $M$ and on the sampled subgraph $\hat G$. To this end we sum Equation (41) over all the possible values of the degrees $\mathbf{k}$ and $\mathbf{q}$ such that Equations (37) are satisfied. Therefore, by inserting Equation (40) into Equation (41), enforcing Equations (37) with Kronecker deltas and summing over all the possible values of $\mathbf{k}$ and $\mathbf{q}$, we get

$$P(N|\hat G,M) \propto \pi(N)\,\hat\theta(N-\hat N)\,C_{M,N}\,I^{(k)}(M)\,I^{(q)}(M,N), \qquad (44)$$
where

$$C_{M,N} = \frac{M!\,(Q-M)!!}{(\langle k\rangle N)!!}\,\binom{Q}{M}, \quad I^{(k)}(M) = \sum_{\mathbf{k}}\prod_{i=1}^{\hat N}p(k_i)\,\frac{k_i!}{(k_i-\kappa_i)!}\,\delta\!\left(M,\ \sum_{i=1}^{\hat N}k_i\right), \quad I^{(q)}(M,N) = \sum_{\mathbf{q}}\prod_{i=\hat N+1}^{N}p(q_i)\,\delta\!\left(Q,\ \sum_{i=\hat N+1}^{N}q_i\right), \qquad (45)$$
where $Q = \langle k\rangle N - 2\hat L - M$. By expressing the Kronecker deltas in integral form according to the expression

$$\delta(x,y) = \frac{1}{2\pi}\int_{-\pi}^{\pi}d\omega\; e^{i\omega(x-y)}, \qquad (46)$$
performing a Wick rotation and evaluating the integrals at the saddle point, we can express $I^{(k)}(M)$ and $I^{(q)}(M,N)$ as

$$I^{(k)}(M) = \frac{1}{2\pi}\prod_{i=1}^{\hat N}\left[\sum_{k>\kappa_i}p(k)\,\frac{k!}{(k-\kappa_i)!}\,e^{-\omega k}\right]e^{\omega M}, \qquad I^{(q)}(M,N) = \frac{1}{2\pi}\left[\sum_{q}p(q)\,e^{-\bar\omega q}\right]^{N-\hat N}e^{\bar\omega Q}, \qquad (47)$$
with $\omega$ and $\bar\omega$ fixed by the saddle point equations

$$M = \sum_{i=1}^{\hat N}\frac{\sum_{k>\kappa_i}p(k)\,\frac{k!}{(k-\kappa_i)!}\,k\,e^{-\omega k}}{\sum_{k>\kappa_i}p(k)\,\frac{k!}{(k-\kappa_i)!}\,e^{-\omega k}}, \qquad Q = (N-\hat N)\,\frac{\sum_{q}p(q)\,q\,e^{-\bar\omega q}}{\sum_{q}p(q)\,e^{-\bar\omega q}}. \qquad (48)$$
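The saddle point equations (48) are one-dimensional root-finding problems. A minimal sketch (Python with NumPy/SciPy; the exponential degree prior, the regular sampled subgraph with all $\kappa_i=\kappa$, for which the first equation reduces to $M=\hat N\langle k\rangle_\omega$, and every numerical value are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

m, K, Nhat, kappa, M, Q = 7.0, 60, 200, 2, 600, 1000
k = np.arange(1, K + 1)
p_k = np.exp(-k / m)                    # illustrative exponential prior p(k)
p_k /= p_k.sum()

def mean_k(omega, kappa):
    """Posterior mean of k for one sampled node of sampled degree kappa."""
    ok = k >= kappa
    # weights p(k) k!/(k-kappa)! e^{-omega k}, computed in log space
    logw = (np.log(p_k[ok]) + gammaln(k[ok] + 1)
            - gammaln(k[ok] - kappa + 1) - omega * k[ok])
    w = np.exp(logw - logw.max())
    return np.sum(w * k[ok]) / np.sum(w)

# first equation of (48) for a regular sampled subgraph
omega = brentq(lambda w: Nhat * mean_k(w, kappa) - M, -5.0, 20.0)

def mean_q(omega_bar):
    w = p_k * np.exp(-omega_bar * k)
    return np.sum(w * k) / np.sum(w)

# second equation of (48), with N - Nhat chosen so a root exists (illustrative)
N_minus_Nhat = 300
omega_bar = brentq(lambda w: N_minus_Nhat * mean_q(w) - Q, -5.0, 20.0)
print(omega, omega_bar)
```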
In Figure 4 we display the marginal a posteriori distribution $P(N|\hat G,M)$ as a function of $M$, demonstrating that the sampled network can significantly modify the prior assumptions on the total number of nodes in the network.

7.2. Inferring the True Parameters with the Grand Canonical Ensemble with Given Latent Variable Distribution

In this section we treat the problem of the Bayesian estimation of the parameters of the true network $G$ given the sampled network $\hat G$, using the grand canonical model with given latent variable distribution. Let us indicate with $\theta_i$ the latent variables of the sampled nodes $1\leq i\leq\hat N$ and with $\phi_i$ the latent variables of the unsampled nodes $i>\hat N$. Using Bayes' rule we have
$$P(N,\boldsymbol\theta,\boldsymbol\phi|\hat G) = \frac{P(N)\,P(\boldsymbol\theta,\boldsymbol\phi|N)\,P(\hat G|\boldsymbol\theta,\boldsymbol\phi,N)}{P(\hat G)}, \qquad (49)$$
where $P(\hat G|\boldsymbol\theta,\boldsymbol\phi,N)$ is independent of $\boldsymbol\phi$, i.e., $P(\hat G|\boldsymbol\theta,\boldsymbol\phi,N)=P(\hat G|\boldsymbol\theta,N)$, and where

$$P(N) = \pi(N), \qquad P(\boldsymbol\theta,\boldsymbol\phi|N) = \prod_{i=1}^{\hat N}p(\theta_i)\prod_{i=\hat N+1}^{N}p(\phi_i), \qquad P(\hat G|\boldsymbol\theta,N) = \prod_{i<j\,|\,i,j\in\hat V}p_N(\theta_i,\theta_j)^{\hat a_{ij}}\left(1-p_N(\theta_i,\theta_j)\right)^{1-\hat a_{ij}}, \qquad (50)$$
with $p_N(\theta_i,\theta_j)$ given by Equation (15) and with $\hat{\mathbf{a}}$ indicating the adjacency matrix of the sampled subgraph $\hat G$. In Equation (49), $P(\hat G)$ indicates the evidence of the data, given by

$$P(\hat G) = \sum_{N}\sum_{\boldsymbol\theta}\pi(N)\prod_{i=1}^{\hat N}p(\theta_i)\,P(\hat G|\boldsymbol\theta,N). \qquad (51)$$
Since, as we have observed previously, $P(\hat G|\boldsymbol\theta,\boldsymbol\phi,N)$ is independent of $\boldsymbol\phi$, the Bayesian estimation of the parameters $\boldsymbol\phi$ reduces simply to the prior in this case. Therefore we focus here only on the Bayesian estimate of the latent variables $\boldsymbol\theta$, i.e., we consider

$$P(N,\boldsymbol\theta|\hat G) = \frac{P(N)\,P(\boldsymbol\theta|N)\,P(\hat G|\boldsymbol\theta,N)}{P(\hat G)}, \qquad (52)$$
with $P(N)$, $P(\hat G|\boldsymbol\theta,N)$ and $P(\hat G)$ having the same definitions as above and

$$P(\boldsymbol\theta|N) = \prod_{i=1}^{\hat N}p(\theta_i). \qquad (53)$$
Using the explicit expression for $p_N(\theta_i,\theta_j)$ given by Equation (15), we can express the likelihood $P(\hat G|\boldsymbol\theta,N)$ of the sampled network as

$$P(\hat G|\boldsymbol\theta,N) = \prod_{i=1}^{\hat N}\theta_i^{\kappa_i}\prod_{i<j\,|\,i,j\in\hat V}\left(1+\frac{\theta_i\theta_j}{N}\right)^{-1}\left(\frac{1}{N}\right)^{\hat L}, \qquad (54)$$
where $\hat L$ is the number of links of the sampled network $\hat G$. In the limit $N\gg 1$ we can approximate this expression as

$$P(\hat G|\boldsymbol\theta,N) \simeq \int d\bar\theta\,\prod_{i=1}^{\hat N}\theta_i^{\kappa_i}\,e^{-\theta_i\bar\theta/2}\left(\frac{1}{N}\right)^{\hat L}\delta\!\left(\bar\theta,\ \sum_{j=1}^{\hat N}\theta_j/N\right). \qquad (55)$$
With this approximation we get that the posterior probability $P(N,\boldsymbol\theta|\hat G)$ is given by

$$P(N,\boldsymbol\theta|\hat G) \propto \pi(N)\left(\frac{1}{N}\right)^{\hat L}\int d\bar\theta\,\prod_{i=1}^{\hat N}p(\theta_i)\,\theta_i^{\kappa_i}\,e^{-\theta_i\bar\theta/2}\,\delta\!\left(\bar\theta,\ \sum_{j=1}^{\hat N}\theta_j/N\right). \qquad (56)$$
Calculating the marginal posterior probability of a single latent variable conditioned on $\bar\theta$ we get

$$P(\theta_i|\hat G,\bar\theta) \propto p(\theta_i)\,\theta_i^{\kappa_i}\,e^{-\theta_i\bar\theta/2}. \qquad (57)$$
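Equation (57) is straightforward to evaluate on a discrete grid of latent variables. A minimal sketch (Python; the exponential prior, the grid, and the value of $\bar\theta$ are illustrative assumptions):

```python
import numpy as np

m, theta_bar = 7.0, 0.6
theta = np.arange(1, 100)                   # discrete latent-variable grid
prior = np.exp(-theta / m)                  # illustrative exponential prior
prior /= prior.sum()

def posterior(kappa):
    """Normalized marginal posterior of Equation (57) for sampled degree kappa."""
    w = prior * theta**kappa * np.exp(-theta * theta_bar / 2.0)
    return w / w.sum()

for kappa in (1, 2, 4):
    post = posterior(kappa)
    print(kappa, np.sum(theta * post))      # posterior mean of theta
```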
In Figure 3 we show the difference between an exponential prior distribution $p(\theta)$ on the latent variables of the nodes and the posterior marginal probability $P(\theta|\hat G,\bar\theta)$ of the true latent variable of the sampled nodes, plotted for different values of the sampled degree $\kappa$ of the same node.
Starting from Equation (56) we can also calculate the posterior distribution $P(N|\hat G)$ of the true number of nodes $N>\hat N$. To this end we express the delta function in integral form and we sum over all possible latent variables $\boldsymbol\theta$, obtaining
$$P(N|\hat G) \propto \pi(N)\,\hat\theta(N-\hat N)\left(\frac{1}{N}\right)^{\hat L-1}\frac{1}{2\pi}\int d\bar\theta\, d\omega\; e^{iN\omega\bar\theta}\, I^{(\theta)}(\omega,\bar\theta), \qquad (58)$$

where $I^{(\theta)}(\omega,\bar\theta)$ is given by

$$I^{(\theta)}(\omega,\bar\theta) = \prod_{i=1}^{\hat N}\sum_{\theta}p(\theta)\,\theta^{\kappa_i}\,e^{-\theta(\bar\theta/2+i\omega)}. \qquad (59)$$
The integrals in Equation (58) can be calculated at the saddle point, yielding

$$P(N|\hat G) \propto \pi(N)\,\hat\theta(N-\hat N)\left(\frac{1}{N}\right)^{\hat L-1}e^{N\bar\theta^2/2}\prod_{i=1}^{\hat N}\sum_{\theta}p(\theta)\,\theta^{\kappa_i}\,e^{-\theta\bar\theta}, \qquad (60)$$
where

$$\bar\theta = \frac{1}{N}\sum_{i=1}^{\hat N}\frac{\sum_{\theta}p(\theta)\,\theta^{\kappa_i+1}\,e^{-\theta\bar\theta}}{\sum_{\theta}p(\theta)\,\theta^{\kappa_i}\,e^{-\theta\bar\theta}}. \qquad (61)$$
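The self-consistent equation (61) can be solved by fixed-point iteration for each value of $N$, after which Equation (60) can be evaluated directly. A minimal sketch (Python; a regular sampled subgraph with all $\kappa_i=\kappa$ and exponential priors are assumed purely for illustration, mirroring the choices of Figure 4):

```python
import numpy as np

m, Nbar, Nhat, kappa = 7.0, 200, 200, 2
theta = np.arange(1, 200)
p_theta = np.exp(-theta / m)
p_theta /= p_theta.sum()

def theta_bar_of_N(N, iters=300):
    tb = 1.0
    for _ in range(iters):
        w = p_theta * theta**kappa * np.exp(-theta * tb)
        new = Nhat * np.sum(w * theta) / (N * np.sum(w))   # Equation (61)
        tb = 0.5 * tb + 0.5 * new                          # damped update
    return tb

def log_post(N):
    """Unnormalized log posterior of Equation (60) for pi(N) ~ exp(-N/Nbar)."""
    tb = theta_bar_of_N(N)
    w = p_theta * theta**kappa * np.exp(-theta * tb)
    Lhat = Nhat * kappa / 2                                # regular subgraph
    return (-N / Nbar - (Lhat - 1) * np.log(N)
            + N * tb**2 / 2 + Nhat * np.log(w.sum()))

Ns = np.arange(Nhat, 2000)
lp = np.array([log_post(N) for N in Ns])
post = np.exp(lp - lp.max())
post /= post.sum()
print(Ns[np.argmax(post)])                                 # MAP estimate of N
```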
In Figure 4 we display the marginal a posteriori distribution $P(N|\hat G)$ of the true number of nodes, in the simplified scenario in which $\hat G$ is regular and all sampled degrees $\kappa$ are the same, demonstrating that the sampled network can significantly modify the prior assumptions on the total number of nodes in the network.

8. Conclusions

In this paper we have proposed grand canonical network ensembles formed by networks with a varying number of nodes. The grand canonical network ensembles we have introduced are both sparse and exchangeable, i.e., they have a finite average degree and are invariant under permutation of the node labels. The grand canonical ensembles are hierarchical network models in which first the network size is selected, then the degree sequence (or the sequence of latent variables), and finally the network adjacency matrix. The model circumvents the difficulties imposed by the Aldous-Hoover theorem, which states that exchangeable infinite sparse network ensembles vanish, as the ensemble is a mixture of finite networks, although the networks can have an arbitrarily large size. Here we have shown how the grand-canonical ensembles can be used to perform a Bayesian estimation of the network parameters when only partial information about the network structure is known. This a posteriori estimation of the network parameters can then be used for network reconstruction.
The grand canonical framework for sparse exchangeable network ensembles is here described for the case of simple networks, but it has the potential to be extended to generalized network structures, including directed networks, bipartite networks, multiplex networks and simplicial complexes, following the lines outlined in ref. [32].
In conclusion, we hope that this work, proposing hierarchical grand canonical network ensembles able to treat networks of different sizes and relating network theory to statistical mechanics, will stimulate further results by mathematicians, physicists, and computer scientists working in network science and related machine learning problems.

Funding

G.B. acknowledges support from the Royal Society IEC\NSFC\191147.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Derivation of $\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$

In this Appendix our goal is to derive the asymptotic expression of $\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ in the limit of large size of the sampled network, $\hat N\gg 1$, and of the true network, $N=(1+\alpha)\hat N\gg 1$ with $\alpha>0$.
Let us assume that the sampled subgraph $\hat G$ is the network between the sampled nodes $1\leq i\leq\hat N$ and has adjacency matrix $\hat{\mathbf{a}}$. The true network is instead formed by $N$ nodes with adjacency matrix $\mathbf{a}$. We assume that $\mathbf{a}$ has the block structure

$$\mathbf{a} = \begin{pmatrix}\hat{\mathbf{a}} & \mathbf{b}\\ \mathbf{b}^{\top} & \tilde{\mathbf{a}}\end{pmatrix}, \qquad (A1)$$
where $\mathbf{b}$ indicates the $\hat N\times\alpha\hat N$ matrix between the sampled nodes and the unsampled nodes, and $\tilde{\mathbf{a}}$ indicates the $(\alpha\hat N)\times(\alpha\hat N)$ adjacency matrix among the unsampled nodes. As we have mentioned in the main text, $\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)$ is the logarithm of the number $\mathcal{N}(\mathbf{k},\mathbf{q}|\boldsymbol\kappa,N)$ of networks (or adjacency matrices $\mathbf{a}$) with degree sequence $[\mathbf{k},\mathbf{q}]$ admitting as a subgraph $\hat G$, having sampled degree sequence $\boldsymbol\kappa$. In statistical mechanics we also call $\mathcal{N}(\mathbf{k},\mathbf{q}|\boldsymbol\kappa,N)$ the partition function of the corresponding statistical mechanics network model, and we indicate it by $Z$. In terms of the matrices $\mathbf{b}$ and $\tilde{\mathbf{a}}$, the partition function $Z=\mathcal{N}(\mathbf{k},\mathbf{q}|\boldsymbol\kappa,N)=\exp\left[\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)\right]$ can be written as

$$Z = \sum_{\mathbf{b},\tilde{\mathbf{a}}}\prod_{i=1}^{\hat N}\delta\!\left(k_i,\ \sum_{j=1}^{N}a_{ij}\right)\prod_{i=\hat N+1}^{N}\delta\!\left(q_i,\ \sum_{j=1}^{N}a_{ij}\right)\delta\!\left(2L,\ \sum_{i=1}^{N}k_i\right). \qquad (A2)$$
Expressing the Kronecker deltas in integral form and performing the sum over the elements of the matrices $\mathbf{b}$ and $\tilde{\mathbf{a}}$, we obtain

$$Z = \int\mathcal{D}\boldsymbol\omega\int\mathcal{D}\tilde{\boldsymbol\omega}\int\frac{d\lambda}{2\pi}\; e^{G(\boldsymbol\omega,\tilde{\boldsymbol\omega},\lambda)}, \qquad (A3)$$
with

$$G(\boldsymbol\omega,\tilde{\boldsymbol\omega},\lambda) = \sum_{i=1}^{\hat N}\left[i\omega_i(k_i-\kappa_i)\right] + \sum_{i=\hat N+1}^{N}\left[i\tilde\omega_i q_i\right] + \sum_{i=1}^{\hat N}\sum_{j=\hat N+1}^{N}\ln\!\left(1+e^{-i\omega_i-i\tilde\omega_j-i\lambda}\right) + \frac{1}{2}\sum_{i=\hat N+1}^{N}\sum_{j=\hat N+1}^{N}\ln\!\left(1+e^{-i\tilde\omega_i-i\tilde\omega_j-i\lambda}\right) + i\lambda(L-\hat L), \qquad (A4)$$

and with $\mathcal{D}\boldsymbol\omega=\prod_{i=1}^{\hat N}[d\omega_i/(2\pi)]$ and $\mathcal{D}\tilde{\boldsymbol\omega}=\prod_{i=\hat N+1}^{N}[d\tilde\omega_i/(2\pi)]$. Let us now introduce the functional order parameters [22,49,50]
$$c_{\kappa,k}(\omega) = \frac{1}{\hat N\hat P(\kappa,k)}\sum_{i=1}^{\hat N}\delta(\omega-\omega_i)\,\delta(k,k_i)\,\delta(\kappa,\kappa_i), \qquad \rho_q(\tilde\omega) = \frac{1}{\alpha\hat N\tilde P(q)}\sum_{i=\hat N+1}^{N}\delta(\tilde\omega-\tilde\omega_i)\,\delta(q,q_i), \qquad (A5)$$

where $\hat P(\kappa,k)$ is the fraction of sampled nodes with degree $\kappa$ in the sampled network and total inferred degree $k$, and $\tilde P(q)$ is the fraction of unsampled nodes with degree $q$. Moreover we have indicated with $L=\langle k\rangle N/2$ and with $\hat L=\sum_{i=1}^{\hat N}\kappa_i/2$. By enforcing the definition of the order parameters with a series of delta functions we obtain
$$1 = \int dc_{\kappa,k}(\omega)\,\delta\!\left(c_{\kappa,k}(\omega) - \frac{1}{\hat N\hat P(\kappa,k)}\sum_{i=1}^{\hat N}\delta(\omega-\omega_i)\,\delta(k,k_i)\,\delta(\kappa,\kappa_i)\right) = \int\frac{d\hat c_{\kappa,k}(\omega)\,dc_{\kappa,k}(\omega)}{2\pi/(\hat N\hat P(\kappa,k)\Delta\omega)}\exp\left\{-i\,\Delta\omega\,\hat c_{\kappa,k}(\omega)\left[\hat N\hat P(\kappa,k)\,c_{\kappa,k}(\omega) - \sum_{i=1}^{\hat N}\delta(\omega-\omega_i)\,\delta(k,k_i)\,\delta(\kappa,\kappa_i)\right]\right\},$$
$$1 = \int d\rho_q(\tilde\omega)\,\delta\!\left(\rho_q(\tilde\omega) - \frac{1}{\alpha\hat N\tilde P(q)}\sum_{i=\hat N+1}^{N}\delta(\tilde\omega-\tilde\omega_i)\,\delta(q,q_i)\right) = \int\frac{d\hat\rho_q(\tilde\omega)\,d\rho_q(\tilde\omega)}{2\pi/(\alpha\hat N\tilde P(q)\Delta\tilde\omega)}\exp\left\{-i\,\Delta\tilde\omega\,\hat\rho_q(\tilde\omega)\left[\alpha\hat N\tilde P(q)\,\rho_q(\tilde\omega) - \sum_{i=\hat N+1}^{N}\delta(\tilde\omega-\tilde\omega_i)\,\delta(q,q_i)\right]\right\}. \qquad (A6)$$
After inserting these expressions into the partition function, taking the limit $\Delta\omega\to 0$, and indicating with $\sum'$ the sum over the allowed degree range, we obtain

$$Z = \sum_{\boldsymbol\kappa}{}'\sum_{\mathbf{k}}{}'\sum_{\mathbf{q}}{}'\int\prod_{\kappa,k}\mathcal{D}c_{\kappa,k}\,\mathcal{D}\hat c_{\kappa,k}\prod_{q}\mathcal{D}\rho_q\,\mathcal{D}\hat\rho_q\int\frac{d\lambda}{2\pi}\; e^{\hat N f}, \qquad (A7)$$
with $f=f(c,\hat c,\rho,\hat\rho,\lambda)$ given by

$$f = -\sum_{\kappa=\hat m}^{K}\sum_{k=\kappa}^{K}\hat P(\kappa,k)\, i\!\int d\omega\,\hat c_{\kappa,k}(\omega)\,c_{\kappa,k}(\omega) - \alpha\, i\!\int d\tilde\omega\sum_{q=\hat m}^{K}\tilde P(q)\,\hat\rho_q(\tilde\omega)\,\rho_q(\tilde\omega) + i\lambda\,(L-\hat L)/\hat N + \Psi + \sum_{\kappa=\hat m}^{K}\sum_{k=\kappa}^{K}\hat P(\kappa,k)\ln\!\int\frac{d\omega}{2\pi}\,e^{i\omega(k-\kappa)+i\hat c_{\kappa,k}(\omega)} + \alpha\sum_{q=\hat m}^{K}\tilde P(q)\ln\!\int\frac{d\tilde\omega}{2\pi}\,e^{i\tilde\omega q+i\hat\rho_q(\tilde\omega)}, \qquad (A8)$$
where $\Psi$ is given by

$$\Psi = \frac{\alpha^2\hat N}{2}\sum_{q=\hat m}^{K}\sum_{q'=\hat m}^{K}\tilde P(q)\,\tilde P(q')\int d\tilde\omega\, d\tilde\omega'\,\rho_q(\tilde\omega)\,\rho_{q'}(\tilde\omega')\ln\!\left(1+e^{-i\tilde\omega-i\tilde\omega'-i\lambda}\right) + \alpha\hat N\sum_{\kappa=\hat m}^{K}\sum_{k=\kappa}^{K}\hat P(\kappa,k)\sum_{q=\hat m}^{K}\tilde P(q)\int d\omega\, d\tilde\omega\, c_{\kappa,k}(\omega)\,\rho_q(\tilde\omega)\ln\!\left(1+e^{-i\omega-i\tilde\omega-i\lambda}\right), \qquad (A9)$$
and where the functional measures are defined as

$$\mathcal{D}c_{\kappa,k}(\omega) = \lim_{\Delta\omega\to 0}\prod_{\omega}\left[dc_{\kappa,k}(\omega)\sqrt{\hat N\hat P(\kappa,k)\Delta\omega/(2\pi)}\right], \qquad \mathcal{D}\hat c_{\kappa,k}(\omega) = \lim_{\Delta\omega\to 0}\prod_{\omega}\left[d\hat c_{\kappa,k}(\omega)\sqrt{\hat N\hat P(\kappa,k)\Delta\omega/(2\pi)}\right],$$
$$\mathcal{D}\rho_q(\tilde\omega) = \lim_{\Delta\tilde\omega\to 0}\prod_{\tilde\omega}\left[d\rho_q(\tilde\omega)\sqrt{\alpha\hat N\tilde P(q)\Delta\tilde\omega/(2\pi)}\right], \qquad \mathcal{D}\hat\rho_q(\tilde\omega) = \lim_{\Delta\tilde\omega\to 0}\prod_{\tilde\omega}\left[d\hat\rho_q(\tilde\omega)\sqrt{\alpha\hat N\tilde P(q)\Delta\tilde\omega/(2\pi)}\right]. \qquad (A10)$$
By putting

$$e^{-i\lambda} = \frac{z}{\hat N}, \qquad (A11)$$

performing a Wick rotation in $\lambda$, and assuming $z/\hat N=e^{-i\lambda}$ real and much smaller than one, i.e., $z/\hat N\ll 1$, which is allowed in the sparse regime, we can linearize the logarithms and express $\Psi$ as

$$\Psi = z\,\alpha\nu\left(\frac{1}{2}\alpha\nu+\hat\nu\right), \qquad (A12)$$
with

$$\nu = \sum_{q=\hat m}^{K}\tilde P(q)\int d\tilde\omega\,\rho_q(\tilde\omega)\,e^{-i\tilde\omega}, \qquad \hat\nu = \sum_{\kappa=\hat m}^{K}\sum_{k=\kappa}^{K}\hat P(\kappa,k)\int d\omega\, c_{\kappa,k}(\omega)\,e^{-i\omega}. \qquad (A13)$$
The saddle point equations determining the value of the partition function can be obtained by performing the (functional) derivatives of $f$ with respect to the functional order parameters, obtaining

$$i\hat c_{\kappa,k}(\omega) = z\,\alpha\nu\, e^{-i\omega}, \qquad i\hat\rho_q(\tilde\omega) = z\,(\alpha\nu+\hat\nu)\, e^{-i\tilde\omega},$$
$$c_{\kappa,k}(\omega) = \frac{\frac{1}{2\pi}\,e^{i\omega(k-\kappa)+i\hat c_{\kappa,k}(\omega)}}{\int\frac{d\omega'}{2\pi}\,e^{i\omega'(k-\kappa)+i\hat c_{\kappa,k}(\omega')}}, \qquad \rho_q(\tilde\omega) = \frac{\frac{1}{2\pi}\,e^{i\tilde\omega q+i\hat\rho_q(\tilde\omega)}}{\int\frac{d\tilde\omega'}{2\pi}\,e^{i\tilde\omega' q+i\hat\rho_q(\tilde\omega')}},$$
$$\frac{2L-2\hat L}{\hat N} = z\,\alpha\nu\,(\alpha\nu+2\hat\nu). \qquad (A14)$$
Let us first calculate the integrals

$$I_{\kappa,k} = \int\frac{d\omega}{2\pi}\,e^{i\omega(k-\kappa)+i\hat c_{\kappa,k}(\omega)} = \frac{(z\alpha\nu)^{k-\kappa}}{(k-\kappa)!}, \qquad I_q = \int\frac{d\tilde\omega}{2\pi}\,e^{i\tilde\omega q+i\hat\rho_q(\tilde\omega)} = \frac{\left[z(\alpha\nu+\hat\nu)\right]^{q}}{q!}. \qquad (A15)$$
Using these expressions for the integrals we can write the functional order parameters as

$$c_{\kappa,k}(\omega) = \frac{1}{2\pi}\,\frac{e^{i\omega(k-\kappa)+z\alpha\nu\, e^{-i\omega}}}{I_{\kappa,k}}, \qquad \rho_q(\tilde\omega) = \frac{1}{2\pi}\,\frac{e^{i\tilde\omega q+z(\alpha\nu+\hat\nu)\, e^{-i\tilde\omega}}}{I_q}. \qquad (A16)$$
With these expressions, using a similar procedure we can express $\hat\nu$ and $\nu$ as

$$\hat\nu = \sum_{\kappa=\hat m}^{K}\sum_{k=\kappa}^{K}\hat P(\kappa,k)\int d\omega\, c_{\kappa,k}(\omega)\, e^{-i\omega} = \sum_{\kappa=\hat m}^{K}\sum_{k=\kappa}^{K}\hat P(\kappa,k)\,(k-\kappa)\,(\alpha z\nu)^{-1},$$
$$\nu = \sum_{q=\hat m}^{K}\tilde P(q)\int d\tilde\omega\,\rho_q(\tilde\omega)\, e^{-i\tilde\omega} = \sum_{q=\hat m}^{K}\tilde P(q)\, q\,\left[z(\alpha\nu+\hat\nu)\right]^{-1}. \qquad (A17)$$
Combining these equations with the last saddle point equation, it is immediate to show that $z$, $\nu$ and $\hat\nu$ are given by

$$z = 1, \qquad \alpha\nu = \sqrt{\frac{Q-M}{\hat N}}, \qquad \hat\nu = \frac{M/\hat N}{\sqrt{(Q-M)/\hat N}}, \qquad (A18)$$
with

$$2L-2\hat L = M+Q. \qquad (A19)$$
Calculating the free energy $\hat N f$ at the saddle point, we get

$$\hat N f = -\frac{1}{2}(Q-M) - M + (L-\hat L)\ln\hat N + \hat N\sum_{\kappa=\hat m}^{K}\sum_{k=\kappa}^{K}\hat P(\kappa,k)\ln\frac{(\alpha\nu)^{k-\kappa}}{(k-\kappa)!} + \alpha\hat N\sum_{q=\hat m}^{K}\tilde P(q)\ln\frac{(\alpha\nu+\hat\nu)^{q}}{q!}, \qquad (A20)$$
which leads to the following asymptotic expression for $Z=\mathcal{N}(\mathbf{k},\mathbf{q}|\boldsymbol\kappa,N)=\exp\left[\hat\Sigma_N(\mathbf{k},\mathbf{q}|\boldsymbol\kappa)\right]$:

$$Z = \mathcal{N}(\mathbf{k},\mathbf{q}|\boldsymbol\kappa,N) \simeq M!\,(Q-M)!!\,\binom{Q}{M}\left[\prod_{i=1}^{\hat N}(k_i-\kappa_i)!\prod_{i=\hat N+1}^{N}q_i!\right]^{-1}. \qquad (A21)$$

References

  1. Barabási, A.L. Network Science; Cambridge University Press: Cambridge, UK, 2016.
  2. Newman, M.E. Networks: An Introduction; Oxford University Press: Oxford, UK, 2010.
  3. Anand, K.; Bianconi, G. Entropy measures for networks: Toward an information theory of complex topologies. Phys. Rev. E 2009, 80, 045102.
  4. Park, J.; Newman, M.E. Statistical mechanics of networks. Phys. Rev. E 2004, 70, 066117.
  5. Bianconi, G. Information theory of spatial network ensembles. In Handbook on Entropy, Complexity and Spatial Dynamics; Edward Elgar Publishing: Cheltenham, UK, 2021.
  6. Cimini, G.; Squartini, T.; Saracco, F.; Garlaschelli, D.; Gabrielli, A.; Caldarelli, G. The statistical physics of real-world networks. Nat. Rev. Phys. 2019, 1, 58–71.
  7. Krioukov, D.; Papadopoulos, F.; Kitsak, M.; Vahdat, A.; Boguná, M. Hyperbolic geometry of complex networks. Phys. Rev. E 2010, 82, 036106.
  8. Orsini, C.; Dankulov, M.M.; Colomer-de Simón, P.; Jamakovic, A.; Mahadevan, P.; Vahdat, A.; Bassler, K.E.; Toroczkai, Z.; Boguná, M.; Caldarelli, G.; et al. Quantifying randomness in real networks. Nat. Commun. 2015, 6, 8627.
  9. Peixoto, T.P. Entropy of stochastic blockmodel ensembles. Phys. Rev. E 2012, 85, 056122.
  10. Radicchi, F.; Krioukov, D.; Hartle, H.; Bianconi, G. Classical information theory of networks. J. Phys. Complex. 2020, 1, 025001.
  11. Pessoa, P.; Costa, F.X.; Caticha, A. Entropic dynamics on Gibbs statistical manifolds. Entropy 2021, 23, 494.
  12. Kim, H.; Del Genio, C.I.; Bassler, K.E.; Toroczkai, Z. Constructing and sampling directed graphs with given degree sequences. New J. Phys. 2012, 14, 023012.
  13. Del Genio, C.I.; Kim, H.; Toroczkai, Z.; Bassler, K.E. Efficient and exact sampling of simple graphs with given arbitrary degree sequence. PLoS ONE 2010, 5, e10012.
  14. Coolen, A.C.; Annibale, A.; Roberts, E. Generating Random Networks and Graphs; Oxford University Press: Oxford, UK, 2017.
  15. Bassler, K.E.; Del Genio, C.I.; Erdős, P.L.; Miklós, I.; Toroczkai, Z. Exact sampling of graphs with prescribed degree correlations. New J. Phys. 2015, 17, 083052.
  16. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512.
  17. Dorogovtsev, S.N.; Mendes, J.F. Evolution of Networks: From Biological Nets to the Internet and WWW; Oxford University Press: Oxford, UK, 2003.
  18. Kharel, S.R.; Mezei, T.R.; Chung, S.; Erdős, P.L.; Toroczkai, Z. Degree-preserving network growth. Nat. Phys. 2021, 18, 100–106.
  19. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620.
  20. Huang, K. Introduction to Statistical Physics; Chapman and Hall/CRC: Boca Raton, FL, USA, 2009.
  21. Anand, K.; Bianconi, G. Gibbs entropy of network ensembles by cavity methods. Phys. Rev. E 2010, 82, 011116.
  22. Bianconi, G.; Coolen, A.C.; Vicente, C.J.P. Entropies of complex networks with hierarchically constrained topologies. Phys. Rev. E 2008, 78, 016114.
  23. Caldarelli, G.; Capocci, A.; De Los Rios, P.; Munoz, M.A. Scale-free networks from varying vertex intrinsic fitness. Phys. Rev. Lett. 2002, 89, 258702.
  24. Bianconi, G.; Pin, P.; Marsili, M. Assessing the relevance of node features for network structure. Proc. Natl. Acad. Sci. USA 2009, 106, 11433–11438.
  25. Airoldi, E.M.; Blei, D.; Fienberg, S.; Xing, E. Mixed membership stochastic blockmodels. Adv. Neural Inf. Process. Syst. 2008, 21, 1981–2014.
  26. Ghavasieh, A.; Nicolini, C.; De Domenico, M. Statistical physics of complex information dynamics. Phys. Rev. E 2020, 102, 052304.
  27. Bevilacqua, B.; Zhou, Y.; Ribeiro, B. Size-invariant graph representations for graph classification extrapolations. In Proceedings of the International Conference on Machine Learning, PMLR, London, UK, 8–11 November 2021; pp. 837–851.
  28. Cotta, L.; Morris, C.; Ribeiro, B. Reconstruction for powerful graph representations. Adv. Neural Inf. Process. Syst. 2021, 34.
  29. De Finetti, B. Funzione Caratteristica di un Fenomeno Aleatorio; Accademia Nazionale Lincei: Rome, Italy, 1931; Volume 4.
  30. Lovász, L. Large Networks and Graph Limits; American Mathematical Society: Providence, RI, USA, 2012; Volume 60.
  31. Chung, F.; Lu, L. The average distances in random graphs with given expected degrees. Proc. Natl. Acad. Sci. USA 2002, 99, 15879–15882.
  32. Bianconi, G. Statistical physics of exchangeable sparse simple networks, multiplex networks, and simplicial complexes. Phys. Rev. E 2022, 105, 034310.
  33. Caron, F.; Fox, E.B. Sparse graphs using exchangeable random measures. J. R. Stat. Soc. Ser. B Stat. Methodol. 2017, 79, 1295.
  34. Borgs, C.; Chayes, J.T.; Cohn, H.; Holden, N. Sparse exchangeable graphs and their limits via graphon processes. arXiv 2016, arXiv:1601.07134.
  35. Veitch, V.; Roy, D.M. The class of random graphs arising from exchangeable random measures. arXiv 2015, arXiv:1512.03099.
  36. Veitch, V.; Roy, D.M. Sampling and estimation for (sparse) exchangeable graphs. Ann. Stat. 2019, 47, 3274–3299.
  37. Borgs, C.; Chayes, J.T.; Smith, A. Private graphon estimation for sparse graphs. arXiv 2015, arXiv:1506.06162.
  38. Borgs, C.; Chayes, J.; Smith, A.; Zadik, I. Revealing network structure, confidentially: Improved rates for node-private graphon estimation. In Proceedings of the 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), Paris, France, 7–9 October 2018; pp. 533–543.
  39. Bianconi, G. Multilayer Networks: Structure and Function; Oxford University Press: Oxford, UK, 2018.
  40. Bianconi, G. Higher-Order Networks: An Introduction to Simplicial Complexes; Cambridge University Press: Cambridge, UK, 2021.
  41. Aldous, D.J. Representations for partially exchangeable arrays of random variables. J. Multivar. Anal. 1981, 11, 581–598.
  42. Hoover, D.N. Relations on Probability Spaces and Arrays of Random Variables; Institute for Advanced Study: Princeton, NJ, USA, 1979; Volume 2, p. 275.
  43. Paton, J.; Hartle, H.; Stepanyants, J.; van der Hoorn, P.; Krioukov, D. Entropy of labeled versus unlabeled networks. arXiv 2022, arXiv:2204.08508.
  44. Peixoto, T.P. Hierarchical block structures and high-resolution model selection in large networks. Phys. Rev. X 2014, 4, 011047.
  45. Gabrielli, A.; Mastrandrea, R.; Caldarelli, G.; Cimini, G. Grand canonical ensemble of weighted networks. Phys. Rev. E 2019, 99, 030301.
  46. Straka, M.J.; Caldarelli, G.; Saracco, F. Grand canonical validation of the bipartite international trade network. Phys. Rev. E 2017, 96, 022306.
  47. Bender, E.A.; Canfield, E.R. The asymptotic number of labeled graphs with given degree sequences. J. Comb. Theory Ser. A 1978, 24, 296–307.
  48. Bianconi, G. Entropy of network ensembles. Phys. Rev. E 2009, 79, 036114.
  49. Courtney, O.T.; Bianconi, G. Generalized network structures: The configuration model and the canonical ensemble of simplicial complexes. Phys. Rev. E 2016, 93, 062311.
  50. Monasson, R.; Zecchina, R. Statistical mechanics of the random K-satisfiability model. Phys. Rev. E 1997, 56, 1357.
Figure 1. Schematic representation of the hierarchical grand canonical ensemble of exchangeable sparse simple networks. The proposed ensemble is a hierarchical model of networks in which first the total number of nodes $N$ is drawn from a $P(N)=\pi(N)$ distribution; then a degree sequence $\mathbf{k}=\{k_1,k_2,\ldots,k_N\}$ is drawn from the distribution $P(\mathbf{k}|N)$ among all the degree sequences compatible with the total number of nodes $N$; finally a network $G$ with adjacency matrix $\mathbf{a}$ is drawn from the distribution $P(G|\mathbf{k},N)$ among all the networks with the given total number of nodes $N$ and degree sequence $\mathbf{k}$. Panel (a) describes the hierarchical nature of the model, panel (b) provides an example of the subsequent draws of the total number of nodes, the degree sequence and the adjacency matrix of the network, and panel (c) is a visualization of the construction of a network according to the proposed model.
Figure 2. Results of the Metropolis-Hastings algorithm for generating grand canonical ensembles with given degree distribution. The number of nodes $N(t)$ as a function of time $t$ in the Metropolis-Hastings simulation of networks with exponential degree distribution (panel (a)) and networks with a more general degree distribution (panel (c)) is shown together with the average degree distribution of the networks, which is stable as the number of nodes varies (symbols of panels (b) and (d)). The solid lines in panels (b) and (d) indicate the target degree distributions $p(k)=Ce^{-k/m}$ with $m=5$ (for panel (b)) and $p(k)=C(3+k)^{-\gamma}$ with $\gamma=3.4$ (for panel (d)). The prior on the number of nodes is taken to be exponential, $\pi(N)=Ce^{-N/\bar N}$ with $\bar N=1000$, with $N_0=500$ and $K=16$.
Figure 3. Marginal posterior probability of the true degree and of the true latent variable of a sampled node. The posterior probability $P(k_i|\hat G,\omega)$ (panel (a)) of the true degree of a sampled node depends on the degree $\kappa$ of the node in the sampled network $\hat G$ and is non-zero only for $k\geq\kappa$. The posterior probability $P(\theta|\hat G,\bar\theta)$ of the latent variable of a sampled node (panel (b)) can be non-zero on the entire range of $\theta$ values allowed by the prior. Here we have plotted $P(k_i|\hat G,\omega)$ and $P(\theta|\hat G,\bar\theta)$ for different values of $\kappa$, and we have chosen $\omega=2$ and $\bar\theta=0.6$. The dashed lines indicate the exponential prior on the degrees (panel (a)) and on the latent variables (panel (b)).
Figure 4. Marginal posterior probability of the true number of nodes in the grand canonical ensemble with given degree distribution and in the grand canonical ensemble with given latent variable distribution. The posterior probability $P(N|\hat G,M)$ in panel (a) of the true number of nodes depends on the total number $M$ of true but not observed links of the sampled nodes and on the total number of sampled links $\hat L$; the posterior probability $P(N|\hat G)$ in panel (b) depends instead only on the degree $\kappa$ of the nodes in the sampled network $\hat G$. We took $N_0=100$ and the priors given by $\pi(N)\propto e^{-N/\hat N}$, $p(k)\propto e^{-k/m}$, $p(\theta)\propto e^{-\theta/m}$ with $\hat N=200$ and $m=7$. In panel (a) we have plotted $P(N|\hat G,M)$ for different values of $M=(\langle k\rangle-n)\hat N$ with $n=1,2,3,4$ and $\hat L=\hat N/2$; in panel (b) we have plotted $P(N|\hat G)$ assuming that $\hat G$ is regular, with all sampled nodes having sampled degree $\kappa=1,2,3,4,5$. The dashed lines indicate the exponential prior $\pi(N)$ on the number of nodes.