
Information Theory in Computational Biology

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 July 2023) | Viewed by 26468

Special Issue Editors


Dr. Alon Bartal
Guest Editor
The School of Business Administration, Bar-Ilan University, Ramat Gan 5290002, Israel
Interests: information networks; complex networks; information science; machine learning; computational biology

Dr. Kathleen M. Jagodnik
Guest Editor
The School of Business Administration, Bar-Ilan University, Ramat Gan 5290002, Israel
Interests: computational biology; bioinformatics; network science; information science; complexity; meta-analysis of science; communication

Special Issue Information

Dear Colleagues,

Starting with Claude Shannon’s foundational work in 1948, the field of Information Theory, key to statistical learning and inference, has shaped a wide range of scientific disciplines. Concepts including self-information, entropy, and mutual information have guided the progress of research ranging from physics to the biological sciences. In recent decades, Information Theory has contributed to significant advances in Computational Biology and Bioinformatics across a broad range of topics.

We are pleased to invite submissions to this Special Issue of Entropy, with the theme “Information Theory in Computational Biology”. Submissions can include, but are not limited to, the following research areas: sequencing, sequence comparison, and error correction; gene expression and transcriptomics; biological networks; omics analyses; genome-wide disease-gene association mapping; and protein sequence, structure, and interaction analysis.

Topics that are particularly welcome include analyses, and/or the development of application tools, involving: single-cell data; multi-omics integration; biological networks; human health; high-dimensional statistical theory for biological applications; unifying definitions and interpretations of statistical interactions; adaptation of existing information-theoretic test statistics and estimators to cases involving missing, erroneous, or heterogeneous data; analyses in which the distributions of the test statistics under the null and alternative hypotheses are unknown; biologically inspired information storage; and efficient analysis of very large datasets.

Submitted manuscripts should present original work, and may describe novel algorithms, methods, metrics, applications, tools, platforms, and other resources that apply Information Theory principles to advance the field of Computational Biology. We encourage the rigorous comparison of original work with existing methods. We also welcome survey papers, as well as essays reflecting on theory, controversies, and/or the state of current research involving the application of Information Theory concepts to Computational Biology, and providing informed recommendations to advance this research. (Please limit commentary/perspective articles to 3,000 words.)

Thank you for considering this publication opportunity. We look forward to your submission!

Requests for extensions to the stated manuscript submission deadline will be considered.

Sincerely,

Dr. Alon Bartal
Dr. Kathleen M. Jagodnik
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • bioinformatics
  • biological networks
  • computational biology
  • gene expression and transcriptomics
  • heterogeneous data
  • human health
  • information theory
  • machine learning
  • omics
  • systems biology

Published Papers (12 papers)


Editorial


5 pages, 231 KiB  
Editorial
Progress in and Opportunities for Applying Information Theory to Computational Biology and Bioinformatics
by Alon Bartal and Kathleen M. Jagodnik
Entropy 2022, 24(7), 925; https://doi.org/10.3390/e24070925 - 03 Jul 2022
Cited by 3 | Viewed by 1995
Abstract
This editorial is intended to provide a brief history of the application of Information Theory to the fields of Computational Biology and Bioinformatics; to succinctly summarize the current state of associated research, and open challenges; and to describe the scope of the invited content for this Special Issue of the journal Entropy with the theme of “Information Theory in Computational Biology” [...]
(This article belongs to the Special Issue Information Theory in Computational Biology)

Research


16 pages, 7306 KiB  
Article
The Weight-Based Feature Selection (WBFS) Algorithm Classifies Lung Cancer Subtypes Using Proteomic Data
by Yangyang Wang, Xiaoguang Gao, Xinxin Ru, Pengzhan Sun and Jihan Wang
Entropy 2023, 25(7), 1003; https://doi.org/10.3390/e25071003 - 29 Jun 2023
Viewed by 1059
Abstract
Feature selection plays an important role in improving the performance of classification or reducing the dimensionality of high-dimensional datasets, such as high-throughput genomics/proteomics data in bioinformatics. As a popular approach with computational efficiency and scalability, information theory has been widely incorporated into feature selection. In this study, we propose a unique weight-based feature selection (WBFS) algorithm that assesses selected features and candidate features to identify the key protein biomarkers for classifying lung cancer subtypes from The Cancer Proteome Atlas (TCPA) database, and we further perform survival analysis relating the selected biomarkers to lung cancer subtypes. Results show good performance of the combination of our WBFS method and a Bayesian network for mining potential biomarkers. These candidate signatures have valuable biological significance in tumor classification and patient survival analysis. Taken together, this study proposes the WBFS method, which helps to explore candidate biomarkers in biomedical datasets and provides useful information for tumor diagnosis and therapy strategies.
(This article belongs to the Special Issue Information Theory in Computational Biology)
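For readers who want to experiment with information-theoretic feature selection of this kind, the sketch below ranks features by their estimated mutual information with the class label and evaluates a simple probabilistic classifier on the top-ranked subset. It is a generic illustration, not the WBFS algorithm: the data matrix, the number of retained features, and the Gaussian naive Bayes stand-in for the paper's Bayesian network are all assumptions.

```python
# Generic mutual-information feature ranking (illustrative; not the WBFS algorithm).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))            # placeholder for TCPA-style protein expression levels
y = rng.integers(0, 2, size=120)           # placeholder lung cancer subtype labels

mi = mutual_info_classif(X, y, random_state=0)   # estimate I(feature; label) per protein
top = np.argsort(mi)[::-1][:20]                  # keep the 20 most informative proteins

clf = GaussianNB()                               # simple stand-in for the paper's Bayesian network
print(cross_val_score(clf, X[:, top], y, cv=5).mean())
```

Replacing the placeholder matrix with real protein levels and subtype labels reproduces the general workflow of scoring, selecting, and validating candidate biomarkers.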

17 pages, 3246 KiB  
Article
Protein Is an Intelligent Micelle
by Irena Roterman and Leszek Konieczny
Entropy 2023, 25(6), 850; https://doi.org/10.3390/e25060850 - 26 May 2023
Cited by 7 | Viewed by 1021
Abstract
Interpreting biological phenomena at the molecular and cellular levels reveals the ways in which information that is specific to living organisms is processed: from the genetic record contained in a strand of DNA, to the translation process, and then to the construction of proteins that carry the flow and processing of information as well as reveal evolutionary mechanisms. A surprisingly small amount of information, on the order of 1 GB, suffices to record human DNA, which is used in the construction of the highly complex system that is the human body. This shows that what is important is not the quantity of information but rather its skillful use—in other words, its proper processing. This paper describes the quantitative relations that characterize information during the successive steps of the “biological dogma”, illustrating a transition from the recording of information in a DNA strand to the production of proteins exhibiting a defined specificity. It is this specificity that is encoded in the form of information and that determines the unique activity, i.e., the measure of a protein’s “intelligence”. In a situation of information deficit at the transformation stage from a primary protein structure to a tertiary or quaternary structure, a particular role is served by the environment as a supplier of complementary information, thus leading to the achievement of a structure that guarantees the fulfillment of a specified function. Its quantitative evaluation is possible using the “fuzzy oil drop” (FOD) model, particularly its modified version (FOD-M), which takes into account the participation of an environment other than water in the construction of a specific 3D structure. The next step of information processing, on a higher organizational level, is the construction of the proteome, where the interrelationship between different functional tasks and organism requirements can be generally characterized by homeostasis. An open system that maintains the stability of all components can be achieved exclusively under automatic control realized by negative feedback loops. This suggests a hypothesis of proteome construction based on a system of negative feedback loops. The purpose of this paper is the analysis of information flow in organisms, with a particular emphasis on the role of proteins in this process. This paper also presents a model introducing the component of changed conditions and its influence on the protein folding process—since the specificity of proteins is coded in their structure.
(This article belongs to the Special Issue Information Theory in Computational Biology)
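As a rough illustration of how divergence entropy can score the agreement between an observed hydrophobicity profile and an idealized one, the sketch below computes a relative-distance (RD) value from Kullback–Leibler divergences against theoretical (T) and random (R) references. The profiles are toy values, and the details of the FOD-M environmental correction are not reproduced here.

```python
# Minimal RD-style criterion in the spirit of the fuzzy oil drop (FOD) model.
# O, T, R are per-residue hydrophobicity profiles normalized to sum to 1 (assumed given).
import numpy as np

def kl(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

O = np.array([0.05, 0.20, 0.30, 0.25, 0.15, 0.05])   # observed profile (toy values)
T = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])   # idealized Gaussian-like profile
R = np.full_like(O, 1.0 / O.size)                     # uniform "random" reference

rd = kl(O, T) / (kl(O, T) + kl(O, R))   # RD below 0.5 suggests O is closer to the idealized profile
print(round(rd, 3))
```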

16 pages, 4384 KiB  
Article
Identifying Cancer Driver Pathways Based on the Mouth Brooding Fish Algorithm
by Wei Zhang, Xiaowen Xiang, Bihai Zhao, Jianlin Huang, Lan Yang and Yifu Zeng
Entropy 2023, 25(6), 841; https://doi.org/10.3390/e25060841 - 24 May 2023
Viewed by 965
Abstract
Identifying the driver genes of cancer progression is of great significance in improving our understanding of the causes of cancer and promoting the development of personalized treatment. In this paper, we identify the driver genes at the pathway level via an existing intelligent optimization algorithm, named the Mouth Brooding Fish (MBF) algorithm. Many methods based on the maximum weight submatrix model to identify driver pathways attach equal importance to coverage and exclusivity and assign them equal weight, but those methods ignore the impact of mutational heterogeneity. Here, we use principal component analysis (PCA) to incorporate covariate data to reduce the complexity of the algorithm and construct a maximum weight submatrix model considering different weights of coverage and exclusivity. Using this strategy, the unfavorable effect of mutational heterogeneity is overcome to some extent. Data involving lung adenocarcinoma and glioblastoma multiforme were tested with this method and the results compared with the MDPFinder, Dendrix, and Mutex methods. When the driver pathway size was 10, the recognition accuracy of the MBF method reached 80% in both datasets, and the weight values of the submatrix were 1.7 and 1.89, respectively, which are better than those of the compared methods. At the same time, in the signal pathway enrichment analysis, the important role of the driver genes identified by our MBF method in the cancer signaling pathway is revealed, and the validity of these driver genes is demonstrated from the perspective of their biological effects.
(This article belongs to the Special Issue Information Theory in Computational Biology)
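The maximum-weight-submatrix idea can be made concrete with a small scoring function that rewards patient coverage and penalizes overlapping (non-exclusive) mutations. The sketch below is illustrative only: the mutation matrix is random, and the coverage/exclusivity weights alpha and beta are hypothetical rather than the values tuned in the paper to counteract mutational heterogeneity.

```python
# Toy scoring of a candidate driver-gene set on a binary mutation matrix
# (patients x genes), in the spirit of maximum-weight-submatrix models.
import numpy as np

def submatrix_weight(mutations, gene_idx, alpha=1.0, beta=0.5):
    sub = mutations[:, gene_idx]                 # restrict to the candidate gene set
    covered = np.any(sub, axis=1).sum()          # patients covered by at least one mutation
    total_hits = sub.sum()                       # all mutation events within the set
    overlap = total_hits - covered               # penalty for non-exclusive (overlapping) coverage
    return alpha * covered - beta * overlap      # alpha, beta are illustrative weights

rng = np.random.default_rng(1)
M = (rng.random((50, 30)) < 0.08).astype(int)    # placeholder mutation matrix
print(submatrix_weight(M, [0, 5, 9]))
```

A metaheuristic such as the Mouth Brooding Fish algorithm would then search over gene subsets to maximize a weight of this kind.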

39 pages, 8736 KiB  
Article
NaRnEA: An Information Theoretic Framework for Gene Set Analysis
by Aaron T. Griffin, Lukas J. Vlahos, Codruta Chiuzan and Andrea Califano
Entropy 2023, 25(3), 542; https://doi.org/10.3390/e25030542 - 21 Mar 2023
Cited by 1 | Viewed by 3037
Abstract
Gene sets are being increasingly leveraged to make high-level biological inferences from transcriptomic data; however, existing gene set analysis methods rely on overly conservative, heuristic approaches for quantifying the statistical significance of gene set enrichment. We created Nonparametric analytical-Rank-based Enrichment Analysis (NaRnEA) to facilitate accurate and robust gene set analysis with an optimal null model derived using the information theoretic Principle of Maximum Entropy. By measuring the differential activity of ~2500 transcriptional regulatory proteins based on the differential expression of each protein’s transcriptional targets between primary tumors and normal tissue samples in three cohorts from The Cancer Genome Atlas (TCGA), we demonstrate that NaRnEA critically improves on two widely used gene set analysis methods: Gene Set Enrichment Analysis (GSEA) and analytical-Rank-based Enrichment Analysis (aREA). We show that the NaRnEA-inferred differential protein activity is significantly correlated with differential protein abundance inferred from independent, phenotype-matched mass spectrometry data in the Clinical Proteomic Tumor Analysis Consortium (CPTAC), confirming the statistical and biological accuracy of our approach. Additionally, our analysis crucially demonstrates that the sample-shuffling empirical null models leveraged by GSEA and aREA for gene set analysis are overly conservative, a shortcoming that is avoided by the newly developed Maximum Entropy analytical null model employed by NaRnEA.
(This article belongs to the Special Issue Information Theory in Computational Biology)
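To see why an analytical null model matters, the sketch below scores a gene set by the rank sum of its members in a differential-expression signature and converts it to a p-value using the exact mean and variance of a rank sum drawn without replacement (a Wilcoxon-style normal approximation). This is only a didactic stand-in: NaRnEA's Maximum Entropy null model is not reproduced here, and the signature is simulated.

```python
# Minimal rank-based gene set scoring with an analytical (normal) null model.
# Illustrative only; not NaRnEA's Maximum Entropy null model.
import numpy as np
from scipy import stats

def rank_set_test(signature, gene_set):
    """signature: dict gene -> differential-expression statistic; gene_set: iterable of genes."""
    genes = list(signature)
    ranks = stats.rankdata([signature[g] for g in genes])    # ranks 1..N, ties averaged
    members = set(gene_set)
    in_set = np.array([g in members for g in genes])
    k, N = in_set.sum(), len(genes)
    rank_sum = ranks[in_set].sum()
    mu = k * (N + 1) / 2.0                                    # null mean of the rank sum
    var = k * (N - k) * (N + 1) / 12.0                        # null variance (sampling w/o replacement)
    z = (rank_sum - mu) / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))                       # analytical two-sided p-value

rng = np.random.default_rng(2)
sig = {f"g{i}": float(rng.normal()) for i in range(1000)}     # simulated signature
print(rank_set_test(sig, [f"g{i}" for i in range(50)]))
```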

17 pages, 399 KiB  
Article
Information Theory for Biological Sequence Classification: A Novel Feature Extraction Technique Based on Tsallis Entropy
by Robson P. Bonidia, Anderson P. Avila Santos, Breno L. S. de Almeida, Peter F. Stadler, Ulisses Nunes da Rocha, Danilo S. Sanches and André C. P. L. F. de Carvalho
Entropy 2022, 24(10), 1398; https://doi.org/10.3390/e24101398 - 01 Oct 2022
Cited by 2 | Viewed by 1906
Abstract
In recent years, there has been an exponential growth in sequencing projects due to accelerated technological advances, leading to a significant increase in the amount of data and resulting in new challenges for biological sequence analysis. Consequently, the use of techniques capable of analyzing large amounts of data has been explored, such as machine learning (ML) algorithms. ML algorithms are being used to analyze and classify biological sequences, despite the intrinsic difficulty of finding representative numerical encodings of biological sequences that are suitable for them. Extracting numerical features to represent sequences makes it statistically feasible to use universal concepts from Information Theory, such as Tsallis and Shannon entropy. In this study, we propose a novel Tsallis entropy-based feature extractor to provide useful information for classifying biological sequences. To assess its relevance, we prepared five case studies: (1) an analysis of the entropic index q; (2) performance testing of the best entropic indices on new datasets; (3) a comparison with Shannon entropy and (4) with other generalized entropies; (5) an investigation of the Tsallis entropy in the context of dimensionality reduction. As a result, our proposal proved to be effective, being superior to Shannon entropy, robust in terms of generalization, and capable of capturing information in fewer dimensions compared with methods such as Singular Value Decomposition and Uniform Manifold Approximation and Projection.
(This article belongs to the Special Issue Information Theory in Computational Biology)
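The core quantity is easy to state: for a probability vector p and entropic index q, the Tsallis entropy is S_q = (1 - Σ p_i^q)/(q - 1), recovering Shannon entropy as q approaches 1. The sketch below computes it over k-mer frequencies of a toy DNA sequence; the choice of k = 3 and the example sequence are arbitrary, and the paper's full feature-extraction pipeline is not reproduced.

```python
# Tsallis-entropy features from k-mer frequencies of a DNA sequence (illustrative sketch).
from collections import Counter
import numpy as np

def tsallis_entropy(probs, q):
    probs = np.asarray(probs, float)
    if np.isclose(q, 1.0):                       # q -> 1 recovers Shannon entropy
        return float(-np.sum(probs * np.log(probs)))
    return float((1.0 - np.sum(probs ** q)) / (q - 1.0))

def kmer_probs(seq, k=3):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return np.array([c / total for c in counts.values()])

seq = "ATGCGATACGCTTAGGCTAGCTAGGATCGATCGTACGATCG"   # toy sequence
features = [tsallis_entropy(kmer_probs(seq), q) for q in (0.5, 1.0, 2.0)]  # vary the entropic index q
print(features)
```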

12 pages, 555 KiB  
Article
An Information-Theoretic Bound on p-Values for Detecting Communities Shared between Weighted Labeled Graphs
by Predrag Obradovic, Vladimir Kovačević, Xiqi Li and Aleksandar Milosavljevic
Entropy 2022, 24(10), 1329; https://doi.org/10.3390/e24101329 - 21 Sep 2022
Viewed by 1176
Abstract
Extraction of subsets of highly connected nodes (“communities” or modules) is a standard step in the analysis of complex social and biological networks. We here consider the problem of finding a relatively small set of nodes in two labeled weighted graphs that is highly connected in both. While many scoring functions and algorithms tackle the problem, the typically high computational cost of the permutation testing required to establish the p-value for the observed pattern presents a major practical obstacle. To address this problem, we here extend the recently proposed CTD (“Connect the Dots”) approach to establish information-theoretic upper bounds on the p-values and lower bounds on the size and connectedness of communities that are detectable. This innovation broadens the applicability of CTD to pairs of graphs.
(This article belongs to the Special Issue Information Theory in Computational Biology)
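The general flavor of such bounds can be conveyed with the standard code-length argument: if an alternative encoding describes the observed pattern in k fewer bits than the null encoding, the probability of achieving such savings under the null is at most 2^(-k). The sketch below applies that inequality to placeholder bit counts; the specific CTD encoding of shared communities is not reproduced, so the numbers are purely illustrative.

```python
# Generic code-length ("no hypercompression") style bound: a pattern whose encoding
# saves k bits relative to the null encoding has p-value at most 2**(-k).
# The bit counts are placeholders; CTD's actual encoder is not reproduced here.
import math

def p_value_upper_bound(null_bits, pattern_bits):
    bits_saved = null_bits - pattern_bits        # description-length savings in bits
    return min(1.0, 2.0 ** (-bits_saved))

# e.g. an 18-bit saving bounds the p-value at 2**-18, roughly 3.8e-06
print(p_value_upper_bound(null_bits=120.0, pattern_bits=102.0))
```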

23 pages, 7494 KiB  
Article
Detecting the Critical States of Type 2 Diabetes Mellitus Based on Degree Matrix Network Entropy by Cross-Tissue Analysis
by Yingke Yang, Zhuanghe Tian, Mengyao Song, Chenxin Ma, Zhenyang Ge and Peiluan Li
Entropy 2022, 24(9), 1249; https://doi.org/10.3390/e24091249 - 05 Sep 2022
Cited by 5 | Viewed by 1608
Abstract
Type 2 diabetes mellitus (T2DM) is a metabolic disease caused by multiple etiologies, the development of which can be divided into three states: normal state, critical state/pre-disease state, and disease state. To avoid irreversible development, it is important to detect the early warning signals before the onset of T2DM. However, detecting critical states of complex diseases based on high-throughput and strongly noisy data remains a challenging task. In this study, we developed a new method, i.e., degree matrix network entropy (DMNE), to detect the critical states of T2DM based on a sample-specific network (SSN). By applying the method to the datasets of three different tissues for experiments involving T2DM in rats, the critical states were detected, and the dynamic network biomarkers (DNBs) were successfully identified. Specifically, for liver and muscle, the critical transitions occur at 4 and 16 weeks. For adipose, the critical transition is at 8 weeks. In addition, we found some “dark genes” that did not exhibit differential expression but displayed sensitivity in terms of their DMNE score, which is closely related to the progression of T2DM. The information uncovered in our study not only provides further evidence regarding the molecular mechanisms of T2DM but may also assist in the development of strategies to prevent this disease.
(This article belongs to the Special Issue Information Theory in Computational Biology)
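A minimal network-entropy calculation helps to convey the idea of scoring a network's state with an entropy-like quantity. The sketch below computes the Shannon entropy of a graph's normalized degree sequence on a random placeholder network; it is a simple proxy for illustration, not the paper's degree matrix network entropy applied to sample-specific networks.

```python
# Degree-distribution entropy of a network snapshot (illustrative proxy, not the DMNE score).
import numpy as np
import networkx as nx

def degree_entropy(graph):
    degrees = np.array([d for _, d in graph.degree()], dtype=float)
    p = degrees / degrees.sum()                  # normalize degrees into a distribution
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

G = nx.erdos_renyi_graph(100, 0.05, seed=3)      # placeholder for a sample-specific network
print(degree_entropy(G))
```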

41 pages, 3410 KiB  
Article
A Comparison of Partial Information Decompositions Using Data from Real and Simulated Layer 5b Pyramidal Cells
by Jim W. Kay, Jan M. Schulz and William A. Phillips
Entropy 2022, 24(8), 1021; https://doi.org/10.3390/e24081021 - 24 Jul 2022
Cited by 3 | Viewed by 2093
Abstract
Partial information decomposition allows the joint mutual information between an output and a set of inputs to be divided into components that are synergistic or shared or unique to each input. We consider five different decompositions and compare their results using data from layer 5b pyramidal cells in two different studies. The first study was on the amplification of somatic action potential output by apical dendritic input and its regulation by dendritic inhibition. We find that two of the decompositions produce much larger estimates of synergy and shared information than the others, as well as large levels of unique misinformation. When within-neuron differences in the components are examined, the five methods produce more similar results for all but the shared information component, for which two methods produce a different statistical conclusion from the others. There are some differences in the expression of unique information asymmetry among the methods. It is significantly larger, on average, under dendritic inhibition. Three of the methods support a previous conclusion that apical amplification is reduced by dendritic inhibition. The second study used a detailed compartmental model to produce action potentials for many combinations of the numbers of basal and apical synaptic inputs. Decompositions of the entire data set produce similar differences to those in the first study. Two analyses of decompositions are conducted on subsets of the data. In the first, the decompositions reveal a bifurcation in unique information asymmetry. For three of the methods, this suggests that apical drive switches to basal drive as the strength of the basal input increases, while the other two show changing mixtures of information and misinformation. Decompositions produced using the second set of subsets show that all five decompositions provide support for properties of cooperative context-sensitivity—to varying extents.
(This article belongs to the Special Issue Information Theory in Computational Biology)

26 pages, 3723 KiB  
Article
Multi-Classifier Fusion Based on MI–SFFS for Cross-Subject Emotion Recognition
by Haihui Yang, Shiguo Huang, Shengwei Guo and Guobing Sun
Entropy 2022, 24(5), 705; https://doi.org/10.3390/e24050705 - 16 May 2022
Cited by 7 | Viewed by 1879
Abstract
With the widespread use of emotion recognition, cross-subject emotion recognition based on EEG signals has become a hot topic in affective computing. Electroencephalography (EEG) can be used to detect the brain’s electrical activity associated with different emotions. The aim of this research is to improve cross-subject recognition accuracy by enhancing the generalization of features. A Multi-Classifier Fusion method based on mutual information with sequential forward floating selection (MI_SFFS) is proposed. The dataset used in this paper is DEAP, which is a multi-modal open dataset containing 32 EEG channels and multiple other physiological signals. First, high-dimensional features are extracted from 15 EEG channels of DEAP after using a 10 s time window for data slicing. Second, MI and SFFS are integrated as a novel feature-selection method. Then, support vector machine (SVM), k-nearest neighbor (KNN) and random forest (RF) classifiers are employed to classify positive and negative emotions, and their output probabilities are used as weighted features for further classification. To evaluate the model performance, leave-one-out cross-validation is adopted. Finally, cross-subject classification accuracies of 0.7089, 0.7106 and 0.7361 are achieved by the SVM, KNN and RF classifiers, respectively. The results demonstrate the feasibility of the model by splicing different classifiers’ output probabilities as a portion of the weighted features.
(This article belongs to the Special Issue Information Theory in Computational Biology)
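The feature-selection step can be sketched as mutual-information scoring followed by a greedy forward search that keeps a feature only when cross-validated accuracy improves. The backtracking ("floating") step of SFFS, the DEAP preprocessing, and the multi-classifier fusion are omitted, and the feature matrix and labels below are simulated.

```python
# Mutual-information scoring plus greedy forward feature selection (SFFS backtracking omitted).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 40))                   # placeholder EEG-derived feature matrix
y = rng.integers(0, 2, size=200)                 # placeholder positive/negative emotion labels

order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
selected, best = [], 0.0
for f in order[:15]:                             # consider the 15 highest-MI features
    trial = selected + [int(f)]
    score = cross_val_score(SVC(), X[:, trial], y, cv=5).mean()
    if score > best:                             # keep the feature only if accuracy improves
        selected, best = trial, score
print(selected, round(best, 3))
```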

26 pages, 4285 KiB  
Article
RNA World Modeling: A Comparison of Two Complementary Approaches
by Jaroslaw Synak, Agnieszka Rybarczyk and Jacek Blazewicz
Entropy 2022, 24(4), 536; https://doi.org/10.3390/e24040536 - 11 Apr 2022
Cited by 1 | Viewed by 1968
Abstract
The origin of life remains one of the major scientific questions in modern biology. Among many hypotheses aiming to explain how life on Earth started, RNA world is probably the most extensively studied. It assumes that, in the very beginning, RNA molecules served as both enzymes and as genetic information carriers. However, even if this is true, there are many questions that still need to be answered—for example, whether the population of such molecules could achieve stability and retain genetic information for many generations, which is necessary in order for evolution to start. In this paper, we try to answer this question based on the parasite–replicase model (RP model), which divides RNA molecules into enzymes (RNA replicases) capable of catalyzing replication and parasites that do not possess replicase activity but can be replicated by RNA replicases. We describe the aforementioned system using partial differential equations and, based on the analysis of the simulation, surmise general rules governing its evolution. We also compare this approach with one where the RP system is modeled and implemented using a multi-agent modeling technique. We show that approaching the description and analysis of the RP system from different perspectives (microscopic represented by MAS and macroscopic depicted by PDE) provides consistent results. Therefore, applying MAS does not lead to erroneous results and allows us to study more complex situations where many cases are concerned, which would not be possible through the PDE model.
(This article belongs to the Special Issue Information Theory in Computational Biology)
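To give a feel for replicase–parasite (RP) dynamics, the sketch below integrates a deliberately simplified, well-mixed ordinary-differential-equation version of the system: replicases copy both replicases and parasites, subject to a shared carrying capacity and decay. The rate constants are invented for illustration, and neither the paper's partial differential equations nor its multi-agent implementation is reproduced.

```python
# Toy well-mixed replicase-parasite dynamics (ODE only; not the paper's PDE or MAS models).
from scipy.integrate import solve_ivp

def rp_model(t, y, a=1.0, b=1.2, d=0.05, K=1.0):
    R, P = y                                     # replicase and parasite concentrations
    crowd = 1.0 - (R + P) / K                    # simple shared carrying capacity
    dR = a * R * R * crowd - d * R               # replicases copy other replicases
    dP = b * R * P * crowd - d * P               # replicases also copy parasites
    return [dR, dP]

sol = solve_ivp(rp_model, (0.0, 50.0), [0.1, 0.01])   # start with few replicases, fewer parasites
print(sol.y[:, -1])                                   # final concentrations in this toy run
```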

Review


19 pages, 2088 KiB  
Review
Revealing the Dynamics of Neural Information Processing with Multivariate Information Decomposition
by Ehren L. Newman, Thomas F. Varley, Vibin K. Parakkattu, Samantha P. Sherrill and John M. Beggs
Entropy 2022, 24(7), 930; https://doi.org/10.3390/e24070930 - 05 Jul 2022
Cited by 8 | Viewed by 3100
Abstract
The varied cognitive abilities and rich adaptive behaviors enabled by the animal nervous system are often described in terms of information processing. This framing raises the issue of how biological neural circuits actually process information, and some of the most fundamental outstanding questions in neuroscience center on understanding the mechanisms of neural information processing. Classical information theory has long been understood to be a natural framework within which information processing can be understood, and recent advances in the field of multivariate information theory offer new insights into the structure of computation in complex systems. In this review, we provide an introduction to the conceptual and practical issues associated with using multivariate information theory to analyze information processing in neural circuits, as well as discussing recent empirical work in this vein. Specifically, we provide an accessible introduction to the partial information decomposition (PID) framework. PID reveals redundant, unique, and synergistic modes by which neurons integrate information from multiple sources. We focus particularly on the synergistic mode, which quantifies the “higher-order” information carried in the patterns of multiple inputs and is not reducible to input from any single source. Recent work in a variety of model systems has revealed that synergistic dynamics are ubiquitous in neural circuitry and show reliable structure–function relationships, emerging disproportionately in neuronal rich clubs, downstream of recurrent connectivity, and in the convergence of correlated activity. We draw on the existing literature on higher-order information dynamics in neuronal networks to illustrate the insights that have been gained by taking an information decomposition perspective on neural activity. Finally, we briefly discuss future promising directions for information decomposition approaches to neuroscience, such as work on behaving animals, multi-target generalizations of PID, and time-resolved local analyses.
(This article belongs to the Special Issue Information Theory in Computational Biology)
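The redundant/unique/synergistic bookkeeping described above can be illustrated on the canonical XOR example, where all of the target information is synergistic. The sketch below uses the simple minimum-mutual-information redundancy as a stand-in for the more refined redundancy measures discussed in the review, and the distribution is a toy two-bit XOR rather than neural data.

```python
# Minimal partial information decomposition bookkeeping for two sources and one target,
# using the minimum-mutual-information redundancy as an illustrative stand-in.
import itertools
import numpy as np

def mi(p_xy):
    """Mutual information (bits) from a joint probability table p(x, y)."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2((p_xy / (px * py))[nz])))

# Joint distribution p(x1, x2, y) for Y = X1 XOR X2 with uniform binary inputs.
p = np.zeros((2, 2, 2))
for x1, x2 in itertools.product((0, 1), repeat=2):
    p[x1, x2, x1 ^ x2] = 0.25

i1 = mi(p.sum(axis=1))                 # I(X1; Y)
i2 = mi(p.sum(axis=0))                 # I(X2; Y)
i12 = mi(p.reshape(4, 2))              # I(X1, X2; Y), treating (X1, X2) as a single variable
red = min(i1, i2)                      # MMI redundancy
unique1, unique2 = i1 - red, i2 - red
synergy = i12 - i1 - i2 + red          # for XOR, the full bit of target information is synergistic
print(red, unique1, unique2, synergy)
```

For XOR, I(X1;Y) = I(X2;Y) = 0 while I(X1,X2;Y) = 1 bit, so the decomposition assigns the entire bit to synergy.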
