Information Decomposition of Target Effects from Multi-Source Interactions

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 June 2017) | Viewed by 110226

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors

Dr. Joseph T. Lizier
Centre for Complex Systems, Faculty of Engineering and IT, The University of Sydney, Sydney, New South Wales, Australia
Interests: information theory; complex systems; information transfer; information dynamics; complex networks; dynamical systems; transfer entropy; computational neuroscience

Dr. Nils Bertschinger
Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany

Prof. Jürgen Jost
1. Max Planck Institute for Mathematics in the Sciences, 04103 Leipzig, Germany
2. Santa Fe Institute for the Sciences of Complexity, Santa Fe, NM, USA

Prof. Michael Wibral
1. MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
2. Max Planck Institute for Dynamics and Self-Organisation, Göttingen, Germany

Special Issue Information

Dear Colleagues,

Shannon information theory has provided rigorous ways to capture our intuitive notions regarding uncertainty and information, and has made an enormous impact in doing so. One of its fundamental measures is mutual information, which captures the average information contained in one variable about another, and vice versa. If we have two source variables and a target, for example, we can measure the information held by one source about the target, the information held by the other source about the target, and the information held by the two sources together about the target. Any other notion about the directed information relationship between these variables that can be captured by classical information-theoretic measures (e.g., conditional mutual information terms) is linearly redundant with those three quantities.
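To fix notation for what follows, write the two sources as X1 and X2 and the target as Y (these symbols are ours, chosen for illustration). The three quantities above are I(X1;Y), I(X2;Y) and I(X1,X2;Y), and the chain rule shows why any conditional mutual information term adds nothing new:

\[
I(X_1;Y \mid X_2) = I(X_1,X_2;Y) - I(X_2;Y), \qquad
I(X_2;Y \mid X_1) = I(X_1,X_2;Y) - I(X_1;Y).
\]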

However, intuitively, there is a strong desire to measure further notions of how this directed information interaction may be decomposed: e.g., how much information the two source variables hold redundantly about the target, how much each source variable holds uniquely, and how much information can only be discerned by synergistically examining the two sources together. These notions go beyond the traditional information-theoretic view of a channel serving the purpose of reliable communication, considering now the situation of multiple communication streams converging on a single target. This is a common situation in biology, and in particular in neuroscience, where, say, the ability of a target to synergistically fuse multiple information sources in a non-trivial fashion is likely to have its own intrinsic value, independently of the reliability of communication.

The absence of measures for such decompositions into redundant, unique and synergistic information is arguably the most fundamental missing piece in classical information theory. Triggered by the formulation of the Partial Information Decomposition framework by Williams and Beer in 2010, the past few years have witnessed a concentration of work by the community on proposing, contrasting, and investigating new measures to capture these notions of information decomposition. Other theoretical developments consider how these measures relate to concepts of information processing in terms of storage, transfer and modification. Meanwhile, computational neuroscience has emerged as a primary application area, owing to significant interest in questions surrounding how target neurons integrate information from large numbers of sources, as well as the availability of data sets with which to investigate these questions.
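Concretely, in the Williams and Beer framework for two sources, the desired atoms, namely redundancy R, unique informations U1 and U2, and synergy S, are tied to the classical quantities by

\[
I(X_1,X_2;Y) = R + U_1 + U_2 + S, \qquad
I(X_1;Y) = R + U_1, \qquad
I(X_2;Y) = R + U_2 .
\]

These are three equations in four unknowns, so classical information theory alone cannot determine the decomposition; defining one further quantity (typically the redundancy) fixes the rest, and it is precisely the choice of that definition on which the measures proposed in recent years differ.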

This Special Issue seeks to bring together these efforts, to capture a snapshot of the current research, as well as to provide impetus for and focused scrutiny on newer work. We also seek to present progress to the wider community and attract further research. We welcome research articles proposing new measures or pointing out future directions, review articles on existing approaches, commentary on properties and limitations of such approaches, philosophical contributions on how such measures may be used or interpreted, applications to empirical data (e.g., neural imaging data), and more.

Dr. Joseph Lizier
Dr. Nils Bertschinger
Prof. Michael Wibral
Prof. Jürgen Jost
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Shannon information
  • information theory
  • information decomposition
  • mutual information
  • synergy
  • redundancy
  • shared information
  • transfer entropy

Published Papers (16 papers)


Editorial


Editorial
Information Decomposition of Target Effects from Multi-Source Interactions: Perspectives on Previous, Current and Future Work
by Joseph T. Lizier, Nils Bertschinger, Jürgen Jost and Michael Wibral
Entropy 2018, 20(4), 307; https://doi.org/10.3390/e20040307 - 23 Apr 2018
Cited by 74 | Viewed by 7383
Abstract
The formulation of the Partial Information Decomposition (PID) framework by Williams and Beer in 2010 attracted a significant amount of attention to the problem of defining redundant (or shared), unique and synergistic (or complementary) components of mutual information that a set of source variables provides about a target. This attention resulted in a number of measures proposed to capture these concepts, theoretical investigations into such measures, and applications to empirical data (in particular to datasets from neuroscience). In this Special Issue on “Information Decomposition of Target Effects from Multi-Source Interactions” at Entropy, we have gathered current work on such information decomposition approaches from many of the leading research groups in the field. We begin our editorial by providing the reader with a review of previous information decomposition research, including an overview of the variety of measures proposed, how they have been interpreted and applied to empirical investigations. We then introduce the articles included in the special issue one by one, providing a similar categorisation of these articles into: i. proposals of new measures; ii. theoretical investigations into properties and interpretations of such approaches, and iii. applications of these measures in empirical studies. We finish by providing an outlook on the future of the field. Full article

Research


Article
Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices
by Conor Finn and Joseph T. Lizier
Entropy 2018, 20(4), 297; https://doi.org/10.3390/e20040297 - 18 Apr 2018
Cited by 52 | Viewed by 7804
Abstract
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example. Full article
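As a brief sketch of the central idea (our notation, not taken verbatim from the paper): for a single realisation (s, t), the pointwise mutual information splits into two unsigned entropic components,

\[
i(s;t) = h(s) - h(s \mid t), \qquad h(s) = -\log p(s), \quad h(s \mid t) = -\log p(s \mid t),
\]

with h(s) playing the role of the specificity and h(s|t) that of the ambiguity; the redundancy-lattice construction is then applied to each component separately, and the resulting atoms are recombined into the overall decomposition.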

Article
Analyzing Information Distribution in Complex Systems
by Sten Sootla, Dirk Oliver Theis and Raul Vicente
Entropy 2017, 19(12), 636; https://doi.org/10.3390/e19120636 - 24 Nov 2017
Cited by 7 | Viewed by 4762
Abstract
Information theory is often utilized to capture both linear and nonlinear relationships between any two parts of a dynamical complex system. Recently, an extension to classical information theory called partial information decomposition has been developed, which allows one to partition the information that two subsystems have about a third one into unique, redundant and synergistic contributions. Here, we apply a recent estimator of partial information decomposition to characterize the dynamics of two different complex systems. First, we analyze the distribution of information in triplets of spins in the 2D Ising model as a function of temperature. We find that while redundant information attains a maximum at the critical point, synergistic information peaks in the disordered phase. Second, we characterize 1D elementary cellular automata rules based on the information distribution between neighboring cells. We describe several clusters of rules with similar partial information decompositions. These examples illustrate how the partial information decomposition provides a characterization of the emergent dynamics of complex systems in terms of the information distributed across their interacting units. Full article
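To make the kind of computation involved concrete, the following is a minimal Python sketch of a bivariate partial information decomposition for a discrete joint distribution, using the original Williams and Beer I_min redundancy; this is an illustrative choice of measure and our own toy code, not necessarily the estimator employed in the paper.

```python
import numpy as np
from collections import defaultdict

def pid_imin(p_xyz):
    """Bivariate PID of I(X1,X2;Y) using the Williams-Beer I_min redundancy.

    p_xyz: dict mapping (x1, x2, y) -> probability (must sum to 1).
    Returns (redundancy, unique_1, unique_2, synergy) in bits.
    """
    # Marginal distributions
    p_y, p_x1, p_x2 = defaultdict(float), defaultdict(float), defaultdict(float)
    p_x1y, p_x2y = defaultdict(float), defaultdict(float)
    p_x12, p_x12y = defaultdict(float), defaultdict(float)
    for (x1, x2, y), p in p_xyz.items():
        p_y[y] += p; p_x1[x1] += p; p_x2[x2] += p
        p_x1y[(x1, y)] += p; p_x2y[(x2, y)] += p
        p_x12[(x1, x2)] += p; p_x12y[((x1, x2), y)] += p

    def mi(p_joint, p_a, p_b):
        # I(A;B) = sum_{a,b} p(a,b) log2( p(a,b) / (p(a) p(b)) )
        return sum(p * np.log2(p / (p_a[a] * p_b[b]))
                   for (a, b), p in p_joint.items() if p > 0)

    def specific_info(y, p_ay, p_a):
        # Specific information I(Y=y; A) = sum_a p(a|y) log2( p(y|a) / p(y) )
        total = 0.0
        for (a, yy), p in p_ay.items():
            if yy == y and p > 0:
                total += (p / p_y[y]) * np.log2((p / p_a[a]) / p_y[y])
        return total

    # I_min: expected minimum specific information over the two sources
    redundancy = sum(p_y[y] * min(specific_info(y, p_x1y, p_x1),
                                  specific_info(y, p_x2y, p_x2))
                     for y in p_y)

    i1, i2 = mi(p_x1y, p_x1, p_y), mi(p_x2y, p_x2, p_y)
    i_joint = mi(p_x12y, p_x12, p_y)
    unique_1, unique_2 = i1 - redundancy, i2 - redundancy
    synergy = i_joint - unique_1 - unique_2 - redundancy
    return redundancy, unique_1, unique_2, synergy

# Example: XOR target, where all of the information is synergistic.
p = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
print(pid_imin(p))  # approximately (0.0, 0.0, 0.0, 1.0)
```

For a spin triplet or a cellular-automaton neighbourhood, p_xyz would be estimated from the empirical joint distribution of the sampled configurations.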

Article
Secret Sharing and Shared Information
by Johannes Rauh
Entropy 2017, 19(11), 601; https://doi.org/10.3390/e19110601 - 09 Nov 2017
Cited by 20 | Viewed by 4583
Abstract
Secret sharing is a cryptographic discipline in which the goal is to distribute information about a secret over a set of participants in such a way that only specific authorized combinations of participants together can reconstruct the secret. Thus, secret sharing schemes are systems of variables in which it is very clearly specified which subsets have information about the secret. As such, they provide perfect model systems for information decompositions. However, following this intuition too far leads to an information decomposition with negative partial information terms, which are difficult to interpret. One possible explanation is that the partial information lattice proposed by Williams and Beer is incomplete and has to be extended to incorporate terms corresponding to higher-order redundancy. These results put bounds on information decompositions that follow the partial information framework, and they hint at where the partial information lattice needs to be improved. Full article
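A standard illustrative example (ours, not quoted from the paper) is the simplest XOR scheme: let the secret S and the first share X1 be independent uniform bits and set X2 = S XOR X1. Then

\[
I(S;X_1) = I(S;X_2) = 0, \qquad I(S;X_1,X_2) = 1 \text{ bit},
\]

so any sensible decomposition must assign the whole bit to synergy: neither share alone tells us anything about the secret, while together they determine it exactly. Richer access structures prescribe analogous patterns of redundant, unique and synergistic information, which is what makes such schemes natural test cases for information decompositions.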

Article
Partial and Entropic Information Decompositions of a Neuronal Modulatory Interaction
by Jim W. Kay, Robin A. A. Ince, Benjamin Dering and William A. Phillips
Entropy 2017, 19(11), 560; https://doi.org/10.3390/e19110560 - 26 Oct 2017
Cited by 16 | Viewed by 5012
Abstract
Information processing within neural systems often depends upon selective amplification of relevant signals and suppression of irrelevant signals. This has been shown many times by studies of contextual effects but there is as yet no consensus on how to interpret such studies. Some researchers interpret the effects of context as contributing to the selective receptive field (RF) input about which neurons transmit information. Others interpret context effects as affecting transmission of information about RF input without becoming part of the RF information transmitted. Here we use partial information decomposition (PID) and entropic information decomposition (EID) to study the properties of a form of modulation previously used in neurobiologically plausible neural nets. PID shows that this form of modulation can affect transmission of information in the RF input without the binary output transmitting any information unique to the modulator. EID produces similar decompositions, except that information unique to the modulator and the mechanistic shared component can be negative when modulating and modulated signals are correlated. Synergistic and source shared components were never negative in the conditions studied. Thus, both PID and EID show that modulatory inputs to a local processor can affect the transmission of information from other inputs. Contrary to what was previously assumed, this transmission can occur without the modulatory inputs becoming part of the information transmitted, as shown by the use of PID with the model we consider. Decompositions of psychophysical data from a visual contrast detection task with surrounding context suggest that a similar form of modulation may also occur in real neural systems. Full article

Article
Multivariate Dependence beyond Shannon Information
by Ryan G. James and James P. Crutchfield
Entropy 2017, 19(10), 531; https://doi.org/10.3390/e19100531 - 07 Oct 2017
Cited by 37 | Viewed by 10028
Abstract
Accurately determining dependency structure is critical to understanding a complex system’s organization. We recently showed that the transfer entropy fails in a key aspect of this—measuring information flow—due to its conflation of dyadic and polyadic relationships. We extend this observation to demonstrate that Shannon information measures (entropy and mutual information, in their conditional and multivariate forms) can fail to accurately ascertain multivariate dependencies due to their conflation of qualitatively different relations among variables. This has broad implications, particularly when employing information to express the organization and mechanisms embedded in complex systems, including the burgeoning efforts to combine complex network theory with information theory. Here, we do not suggest that any aspect of information theory is wrong. Rather, the vast majority of its informational measures are simply inadequate for determining the meaningful relationships among variables within joint probability distributions. We close by demonstrating that such distributions exist across an arbitrary set of variables. Full article

Article
Bivariate Partial Information Decomposition: The Optimization Perspective
by Abdullah Makkeh, Dirk Oliver Theis and Raul Vicente
Entropy 2017, 19(10), 530; https://doi.org/10.3390/e19100530 - 07 Oct 2017
Cited by 18 | Viewed by 4581
Abstract
Bertschinger, Rauh, Olbrich, Jost, and Ay (Entropy, 2014) have proposed a definition of a decomposition of the mutual information MI(X : Y, Z) into shared, synergistic, and unique information by way of solving a convex optimization problem. In this paper, we discuss the solution of their Convex Program from theoretical and practical points of view. Full article
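For orientation, a hedged sketch of the optimization in question (our notation; see the cited paper for the precise statement): the unique information that Y holds about the target X is obtained by minimising a conditional mutual information over all distributions Q that preserve the two target-source marginals of P,

\[
\widetilde{UI}(X; Y \setminus Z) \;=\; \min_{Q \in \Delta_P} I_Q(X;Y \mid Z),
\qquad
\Delta_P = \{\, Q : Q(X,Y) = P(X,Y),\ Q(X,Z) = P(X,Z) \,\},
\]

with the shared and synergistic components then fixed by the consistency equations relating them to MI(X : Y), MI(X : Z) and MI(X : Y, Z). The feasible set is a polytope and the problem is a convex program, which is the object of study here.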

Article
Coarse-Graining and the Blackwell Order
by Johannes Rauh, Pradeep Kr. Banerjee, Eckehard Olbrich, Jürgen Jost, Nils Bertschinger and David Wolpert
Entropy 2017, 19(10), 527; https://doi.org/10.3390/e19100527 - 06 Oct 2017
Cited by 11 | Viewed by 6436
Abstract
Suppose we have a pair of information channels, κ1 and κ2, with a common input. The Blackwell order is a partial order over channels that compares κ1 and κ2 by the maximal expected utility an agent can obtain when decisions are based on the channel outputs. Equivalently, κ1 is said to be Blackwell-inferior to κ2 if and only if κ1 can be constructed by garbling the output of κ2. A related partial order stipulates that κ2 is more capable than κ1 if the mutual information between the input and output is larger for κ2 than for κ1 for any distribution over inputs. A Blackwell-inferior channel is necessarily less capable. However, examples are known where κ1 is less capable than κ2 but not Blackwell-inferior. We show that this may even happen when κ1 is constructed by coarse-graining the inputs of κ2. Such a coarse-graining is a special kind of “pre-garbling” of the channel inputs. This example directly establishes that the expected value of the shared utility function for the coarse-grained channel is larger than it is for the non-coarse-grained channel. This contradicts the intuition that coarse-graining can only destroy information and lead to inferior channels. We also discuss our results in the context of information decompositions. Full article
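In symbols (our notation, paraphrasing the definitions in the abstract): κ1 is Blackwell-inferior to κ2 if κ1 = λ ∘ κ2 for some channel (garbling) λ applied to the output of κ2, whereas κ2 is more capable than κ1 if

\[
I(S; \kappa_1(S)) \;\le\; I(S; \kappa_2(S)) \quad \text{for every input distribution } p(S).
\]

Blackwell inferiority implies being less capable, but the converse fails, and the example in this paper shows that it can fail even when κ1 is obtained from κ2 by merely coarse-graining the inputs.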

Article
Quantifying Information Modification in Developing Neural Networks via Partial Information Decomposition
by Michael Wibral, Conor Finn, Patricia Wollstadt, Joseph T. Lizier and Viola Priesemann
Entropy 2017, 19(9), 494; https://doi.org/10.3390/e19090494 - 14 Sep 2017
Cited by 42 | Viewed by 7876
Abstract
Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, formal information-theoretic definitions were, until very recently, only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. We here review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise PID and the analysis of information modification hold for the understanding of neural systems. Full article

Article
The Partial Information Decomposition of Generative Neural Network Models
by Tycho M.S. Tax, Pedro A.M. Mediano and Murray Shanahan
Entropy 2017, 19(9), 474; https://doi.org/10.3390/e19090474 - 06 Sep 2017
Cited by 39 | Viewed by 8424
Abstract
In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two differentiated phases during the training process: a first short phase in which the neurons learn redundant information about the target, and a second phase in which neurons start specialising and each of them learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations. Full article

Article
Information Theoretical Study of Cross-Talk Mediated Signal Transduction in MAPK Pathways
by Alok Kumar Maity, Pinaki Chaudhury and Suman K. Banik
Entropy 2017, 19(9), 469; https://doi.org/10.3390/e19090469 - 05 Sep 2017
Cited by 6 | Viewed by 3842
Abstract
Biochemical networks having similar functional pathways are often correlated due to cross-talk among the homologous proteins in the different networks. Using a stochastic framework, we address the functional significance of the cross-talk between two pathways. A theoretical analysis of generic MAPK pathways reveals that cross-talk is responsible for developing coordinated fluctuations between the pathways. The extent of correlation evaluated in terms of the information-theoretic measure provides directionality to net information propagation. Stochastic time series suggest that the cross-talk generates synchronisation in a cell. In addition, the cross-interaction develops correlation between two different phosphorylated kinases expressed in each of the cells in a population of genetically identical cells. Depending on the number of inputs and outputs, we identify signal integration and signal bifurcation motifs that arise due to inter-pathway connectivity in the composite network. Analysis using partial information decomposition, an extended formalism of multivariate information calculation, also quantifies the net synergy in the information propagation through the branched pathways. Under this formalism, a signature of synergy or redundancy is observed due to the architectural difference in the branched pathways. Full article
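A common convention, stated here as an assumption about what is meant since the abstract does not give the formula, quantifies net synergy as the amount by which the joint information exceeds the sum of the individual contributions,

\[
\Delta I \;=\; I(\text{out}; \text{in}_1, \text{in}_2) - I(\text{out}; \text{in}_1) - I(\text{out}; \text{in}_2),
\]

which within a partial information decomposition equals synergy minus redundancy, so its sign indicates whether synergy or redundancy dominates.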

Article
Morphological Computation: Synergy of Body and Brain
by Keyan Ghazi-Zahedi, Carlotta Langer and Nihat Ay
Entropy 2017, 19(9), 456; https://doi.org/10.3390/e19090456 - 31 Aug 2017
Cited by 16 | Viewed by 6390
Abstract
There are numerous examples that show how the exploitation of the body’s physical properties can lift the burden of the brain. Examples include grasping, swimming, locomotion, and motion detection. The term Morphological Computation was originally coined to describe processes in the body that would otherwise have to be conducted by the brain. In this paper, we argue for a synergistic perspective, and by that we mean that Morphological Computation is a process which requires a close interaction of body and brain. Based on a model of the sensorimotor loop, we study a new measure of synergistic information and show that it is more reliable in cases in which there is no synergistic information, compared to previous results. Furthermore, we discuss an algorithm that allows the calculation of the measure in non-trivial (non-binary) systems. Full article

Article
Invariant Components of Synergy, Redundancy, and Unique Information among Three Variables
by Giuseppe Pica, Eugenio Piasini, Daniel Chicharro and Stefano Panzeri
Entropy 2017, 19(9), 451; https://doi.org/10.3390/e19090451 - 28 Aug 2017
Cited by 20 | Viewed by 6678
Abstract
In a system of three stochastic variables, the Partial Information Decomposition (PID) of Williams and Beer dissects the information that two variables (sources) carry about a third variable (target) into nonnegative information atoms that describe redundant, unique, and synergistic modes of dependencies among the variables. However, the classification of the three variables into two sources and one target limits the dependency modes that can be quantitatively resolved, and does not naturally suit all systems. Here, we extend the PID to describe trivariate modes of dependencies in full generality, without introducing additional decomposition axioms or making assumptions about the target/source nature of the variables. By comparing different PID lattices of the same system, we unveil a finer PID structure made of seven nonnegative information subatoms that are invariant to different target/source classifications and that are sufficient to describe the relationships among all PID lattices. This finer structure naturally splits redundant information into two nonnegative components: the source redundancy, which arises from the pairwise correlations between the source variables, and the non-source redundancy, which does not, and relates to the synergistic information the sources carry about the target. The invariant structure is also sufficient to construct the system’s entropy, hence it characterizes completely all the interdependencies in the system. Full article

Article
Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
by Luca Faes, Daniele Marinazzo and Sebastiano Stramaglia
Entropy 2017, 19(8), 408; https://doi.org/10.3390/e19080408 - 08 Aug 2017
Cited by 67 | Viewed by 10325
Abstract
Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms, constituting the frameworks known as interaction information decomposition and partial information decomposition, can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology firstly to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone. Full article
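As a small, self-contained illustration of how such quantities can be computed for jointly Gaussian variables from a covariance matrix (using the simple minimum-mutual-information redundancy as the PID convention; this is an assumption made for the sketch, not necessarily the measure or the state-space machinery used in the paper):

```python
import numpy as np

def gaussian_mi(cov, idx_a, idx_b):
    """I(A;B) in bits for jointly Gaussian variables, from their covariance matrix."""
    a, b, ab = list(idx_a), list(idx_b), list(idx_a) + list(idx_b)
    det = lambda ix: np.linalg.det(cov[np.ix_(ix, ix)])
    return 0.5 * np.log2(det(a) * det(b) / det(ab))

def gaussian_pid_mmi(cov, target, src1, src2):
    """Bivariate PID of I(target; src1, src2) with redundancy = min(I1, I2)."""
    i1 = gaussian_mi(cov, target, src1)
    i2 = gaussian_mi(cov, target, src2)
    i12 = gaussian_mi(cov, target, list(src1) + list(src2))
    red = min(i1, i2)                      # MMI redundancy (assumed convention)
    return {"redundant": red,
            "unique_1": i1 - red,
            "unique_2": i2 - red,
            "synergistic": i12 - i1 - i2 + red}

# Toy example: Y = X1 + X2 + noise, with X1 and X2 weakly correlated.
cov = np.array([[2.7, 1.1, 1.1],    # Y
                [1.1, 1.0, 0.1],    # X1
                [1.1, 0.1, 1.0]])   # X2
print(gaussian_pid_mmi(cov, [0], [1], [2]))
```

The multiscale analysis in the paper additionally requires the filtered and downsampled process representation; the point of the sketch is only that, for Gaussians, every term reduces to determinants of (sub)covariance matrices.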

Article
On Extractable Shared Information
by Johannes Rauh, Pradeep Kr. Banerjee, Eckehard Olbrich, Jürgen Jost and Nils Bertschinger
Entropy 2017, 19(7), 328; https://doi.org/10.3390/e19070328 - 03 Jul 2017
Cited by 19 | Viewed by 4566
Abstract
We consider the problem of quantifying the information shared by a pair of random variables X1, X2 about another variable S. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about S is bounded from below by the information shared about f(S) for any function f. We show that our measure leads to a new nonnegative decomposition of the mutual information I(S; X1X2) into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic. Full article
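Formally, left monotonicity as described here reads

\[
SI(S; X_1, X_2) \;\ge\; SI(f(S); X_1, X_2) \quad \text{for every function } f \text{ of the target } S.
\]

The extractable variant can be read, roughly, as building this property in by optimising over such functions, e.g. taking the supremum of a base shared-information measure over f(S); the reader should consult the paper for the exact construction, as this phrasing is our gloss rather than a quotation.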
Article
Measuring Multivariate Redundant Information with Pointwise Common Change in Surprisal
by Robin A. A. Ince
Entropy 2017, 19(7), 318; https://doi.org/10.3390/e19070318 - 29 Jun 2017
Cited by 84 | Viewed by 9713
Abstract
The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables Xi. It can be thought of as quantifying overlapping information content or similarities in the representation of S between the Xi. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples. Full article
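For orientation, the local (pointwise) co-information underlying this construction is the standard quantity (our notation)

\[
c(x_1; x_2; s) \;=\; i(x_1; s) + i(x_2; s) - i(x_1, x_2; s),
\qquad i(a;b) = \log \frac{p(a,b)}{p(a)\,p(b)} ,
\]

and, as described above, the proposed redundancy measure retains only those local terms whose interpretation as a common change in surprisal is unambiguous, evaluated under a suitably constrained maximum entropy distribution.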
