The Neural Basis of Multisensory Plasticity

A special issue of Brain Sciences (ISSN 2076-3425). This special issue belongs to the section "Sensory and Motor Neuroscience".

Deadline for manuscript submissions: closed (25 January 2023)

Special Issue Editor

Dr. Aihua Chen
Key Laboratory of Brain Functional Genomics, Ministry of Education, East China Normal University, No. 3663 Zhongshan North Road, Shanghai 200062, China
Interests: multisensory; self-motion; heading perception; vestibular; visual; primate; electrophysiology; microstimulation; reversible inactivation; causal link

Special Issue Information

Dear Colleagues,

Multisensory plasticity is central to our perception and action: it enables the senses to calibrate dynamically across modalities and to adapt to the external environment, providing brain functions that are both stable and flexible. It is observed over temporal scales ranging from short-term multisensory learning to cross-modal change across the lifespan. However, the precise neural mechanisms of multisensory plasticity are not fully understood: experience and training can affect the process of multisensory integration, while multisensory training can in turn drive functional and structural plasticity. The themes of this Special Issue may include any aspect of multisensory plasticity, such as:

  • How multisensory perception and action are affected by altered short-term sensory experience or by long-term sensory and motor disabilities;
  • The neural networks underlying multisensory plasticity, as revealed by functional imaging;
  • Neural activity correlated with multisensory plasticity behavior;
  • Developmental changes in multisensory perception and the associated neural circuit changes;
  • The molecular and cellular mechanisms that underlie multisensory plasticity behavior;
  • Computational principles and models of multisensory plasticity operations and network dynamics.

Dr. Aihua Chen
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Brain Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multisensory plasticity
  • neural circuit
  • functional imaging
  • electrophysiology
  • computational modeling

Published Papers (10 papers)


Research


16 pages, 1984 KiB  
Article
Neural Indicators of Visual and Auditory Recognition of Imitative Words on Different De-Iconization Stages
by Liubov Tkacheva, Maria Flaksman, Yulia Sedelkina, Yulia Lavitskaya, Andrey Nasledov and Elizaveta Korotaevskaya
Brain Sci. 2023, 13(4), 681; https://doi.org/10.3390/brainsci13040681 - 19 Apr 2023
Abstract
The research aims to reveal neural indicators of recognition for iconic words and the possible cross-modal multisensory integration behind this process. The goals of this research are twofold: (1) to register event-related potentials (ERPs) in the brain during visual and auditory recognition of Russian imitative words at different de-iconization stages; and (2) to establish whether differences in brain activity arise while processing visual and auditory stimuli of different natures. Sound imitative (onomatopoeic, mimetic, and ideophonic) words are words with an iconic correlation between form and meaning (iconicity being a relationship of resemblance). Russian adult participants (n = 110) were presented with 15 stimuli both visually and auditorily. The stimulus material was equally distributed into three groups according to the criterion of (historical) iconicity loss: five explicit sound imitative (SI) words, five implicit SI words, and five non-SI words. It was established that there was no statistically significant difference between visually presented explicit or implicit SI words and non-SI words, respectively. However, statistically significant differences were registered for auditorily presented explicit SI words in contrast to implicit SI words in the N400 ERP component, as well as for implicit SI words in contrast to non-SI words in the P300 ERP component. We thoroughly analyzed the integrative brain activity in response to explicit SI words and compared it to that in response to implicit SI and non-SI words presented auditorily. The data yielded by this analysis showed that the N400 ERP component was more prominent during the recognition of explicit SI words at the central channels (specifically Cz). We assume that these results indicate a specific brain response associated with directed attention in the process of performing cognitive decision-making tasks regarding explicit and implicit SI words presented auditorily.
This may reflect a higher level of cognitive complexity in identifying this type of stimuli considering the experimental task challenges that may involve cross-modal integration process. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)

11 pages, 8886 KiB  
Article
Network-Based Differences in Top–Down Multisensory Integration between Adult ADHD and Healthy Controls—A Diffusion MRI Study
by Marcel Schulze, Behrem Aslan, Ezequiel Farrher, Farida Grinberg, Nadim Shah, Markus Schirmer, Alexander Radbruch, Tony Stöcker, Silke Lux and Alexandra Philipsen
Brain Sci. 2023, 13(3), 388; https://doi.org/10.3390/brainsci13030388 - 23 Feb 2023
Abstract
Background: Attention-deficit–hyperactivity disorder (ADHD) is a neurodevelopmental disorder neurobiologically conceptualized as a network disorder in white and gray matter. A relatively new branch in ADHD research is sensory processing. Here, altered sensory processing, i.e., sensory hypersensitivity, is reported, especially in the auditory domain. However, our perception is driven by a complex interplay across different sensory modalities. Our brain is specialized in binding those different sensory modalities into a unified percept—a process called multisensory integration (MI) that is mediated through fronto-temporal and fronto-parietal networks. MI has recently been described to be impaired for complex stimuli in adult patients with ADHD. The current study relates MI in adult ADHD to diffusion-weighted imaging. Connectome-based and graph-theoretic analyses were applied to investigate a possible relationship between the ability to integrate multimodal input and network-based ADHD pathophysiology. Methods: Multishell, high-angular-resolution diffusion-weighted imaging was performed on twenty-five patients with ADHD (six females; age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (nine females; age: 26.88 (SD: 6.3) years). A structural connectome was created, and graph theory was applied to investigate ADHD pathophysiology. Additionally, MI scores, i.e., the percentage of successful multisensory integration derived from the McGurk paradigm, were groupwise correlated with the structural connectome. Results: Structural connectivity was elevated in patients with ADHD in network hubs, mirroring altered default-mode network activity typically reported for patients with ADHD. Compared to controls, MI was associated with higher connectivity in ADHD between Heschl’s gyrus and auditory parabelt regions, along with altered fronto-temporal network integrity. Conclusion: Alterations in structural network integrity in adult ADHD can be extended to multisensory behavior.
MI and the respective network integration in ADHD might represent the maturational cortical delay that extends to adulthood with respect to sensory processing. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)

21 pages, 2113 KiB  
Article
The Contribution of Visual and Auditory Working Memory and Non-Verbal IQ to Motor Multisensory Processing in Elementary School Children
by Areej A. Alhamdan, Melanie J. Murphy, Hayley E. Pickering and Sheila G. Crewther
Brain Sci. 2023, 13(2), 270; https://doi.org/10.3390/brainsci13020270 - 05 Feb 2023
Cited by 1
Abstract
Although cognitive abilities have been shown to facilitate multisensory processing in adults, the development of cognitive abilities such as working memory and intelligence, and their relationship to multisensory motor reaction times (MRTs), has not been well investigated in children. Thus, the aim of the current study was to explore the contribution of age-related cognitive abilities to multisensory MRTs in response to auditory, visual, and audiovisual stimuli, and to a visuomotor eye–hand co-ordination processing task, in elementary school-age children (n = 75) aged 5–10 years. Cognitive performance was measured on classical working memory tasks, such as forward and backward visual and auditory digit spans, and on the Raven’s Coloured Progressive Matrices (RCPM), a test of nonverbal intelligence. Bayesian analysis revealed decisive evidence for age-group differences across grades on visual digit span tasks and RCPM scores, but not on auditory digit span tasks. The results also showed decisive evidence for the relationship between performance on more complex visually based tasks, such as difficult items of the RCPM and the visual digit span, and multisensory MRT tasks. Bayesian regression analysis demonstrated that visual WM digit span tasks, together with nonverbal IQ, were the strongest unique predictors of multisensory processing. This suggests that the capacity of visual memory, rather than auditory processing abilities, becomes the most important cognitive predictor of multisensory MRTs, and potentially contributes to the expected age-related increase in cognitive abilities and multisensory motor processing. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)

14 pages, 1118 KiB  
Article
Graphemic and Semantic Pathways of Number–Color Synesthesia: A Dissociation of Conceptual Synesthesia Mechanisms
by Shimeng Yue and Lihan Chen
Brain Sci. 2022, 12(10), 1400; https://doi.org/10.3390/brainsci12101400 - 17 Oct 2022
Abstract
Number–color synesthesia is a condition in which synesthetes perceive numbers with the concurrent experience of specific, corresponding colors. It has been proposed that synesthetic association exists primarily between representations of Arabic digit graphemes and colors, and that a secondary, semantic connection between numerosity and colors is built via repeated co-activation. However, this distinction between the graphemic and semantic pathways of the synesthetic number–color connection has not been empirically tested. The current study aims to dissociate the graphemic and semantic aspects of color activation in number–color synesthesia by comparing their time courses. We adopted a synesthetic priming paradigm with varied stimulus onset asynchronies (SOAs). A number (2–6, prime) was presented in one of three notations: digit, dice, or non-canonical dot pattern, and a color patch (target) appeared with an SOA of 0, 100, 300, 400, or 800 ms. Participants reported the color as quickly as possible. Using the congruency effect (i.e., shorter reaction time when the target color matched the synesthetic color of the number prime) as an index of synesthetic color activation, we revealed that the effect from the graphemic pathway is quick and relatively persistent, while the effect from the semantic pathway unfolds at a later stage and is more transient. The dissociation between the graphemic and semantic pathways of synesthesia implies a further functional distinction within “conceptual synesthesia”, which was originally discussed as a unitary phenomenon. This distinction was demonstrated by the differential time courses of synesthetic color activations, suggesting that a presumed single type of synesthesia could involve multiple mechanisms. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)

15 pages, 2122 KiB  
Article
Visual-Based Spatial Coordinate Dominates Probabilistic Multisensory Inference in Macaque MST-d Disparity Encoding
by Jiawei Zhang, Mingyi Huang, Yong Gu, Aihua Chen and Yuguo Yu
Brain Sci. 2022, 12(10), 1387; https://doi.org/10.3390/brainsci12101387 - 13 Oct 2022
Abstract
Numerous studies have demonstrated that animal brains accurately infer whether multisensory stimuli are from a common source or separate sources. Previous work proposed that multisensory neurons in the dorsal medial superior temporal area (MST-d) serve as integration or separation encoders determined by the tuning–response ratio. However, it remains unclear whether MST-d neurons mainly take one sensory input as the spatial coordinate reference for carrying out multisensory integration or separation. Our analysis of Macaque MST-d neuronal recordings shows that the preferred tuning response to visual input is generally larger than that to vestibular input. This may be crucial for serving as the base coordinate reference when the subject perceives moving-direction information from two senses. By constructing a flexible Monte-Carlo probabilistic sampling (fMCS) model, we validate the hypothesis that visual and vestibular cues are more likely to be integrated into a visual-based coordinate rather than a vestibular-based one. Furthermore, the property of the tuning gradient also affects decision-making regarding whether the cues should be integrated or not. For a dominant modality, an effective decision is produced by a steep response-tuning gradient of the corresponding neurons, while for a subordinate modality a steep tuning gradient produces a rigid decision with a significant bias toward either integration or separation. This work proposes that the tuning response amplitude and the tuning gradient jointly modulate which modality serves as the base coordinate of the reference frame and which modality's direction changes are decoded effectively. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)
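
The question of which modality anchors the fused estimate builds on the textbook baseline of reliability-weighted (maximum-likelihood) cue combination, in which each cue is weighted by its inverse variance, so the more reliable modality (here, typically vision) dominates. The sketch below illustrates only that general principle, not the authors' fMCS model; the function name and example values are illustrative assumptions.

```python
import math

def fuse_headings(est_visual, sigma_visual, est_vestibular, sigma_vestibular):
    """Reliability-weighted (maximum-likelihood) fusion of two heading cues.

    Each cue is weighted by its inverse variance, so the more reliable
    (lower-sigma) modality pulls the fused estimate toward itself, and the
    fused estimate is more precise than either cue alone.
    """
    w_vis = 1.0 / sigma_visual**2
    w_ves = 1.0 / sigma_vestibular**2
    fused = (w_vis * est_visual + w_ves * est_vestibular) / (w_vis + w_ves)
    sigma_fused = math.sqrt(1.0 / (w_vis + w_ves))
    return fused, sigma_fused

# A visual cue twice as reliable as the vestibular cue dominates:
# fuse_headings(10.0, 2.0, 20.0, 4.0) gives a fused heading of 12.0 degrees,
# four times closer to the visual estimate than to the vestibular one.
```

Under this scheme, a dominant visual modality falls out of the noise statistics alone; the abstract above argues that the tuning amplitude and gradient of MST-d neurons add a further, neurally grounded layer to that story.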

18 pages, 2061 KiB  
Article
Transcranial Direct-Current Stimulation Does Not Affect Implicit Sensorimotor Adaptation: A Randomized Sham-Controlled Trial
by Huijun Wang and Kunlin Wei
Brain Sci. 2022, 12(10), 1325; https://doi.org/10.3390/brainsci12101325 - 29 Sep 2022
Cited by 2
Abstract
Humans constantly calibrate their sensorimotor system to accommodate environmental changes, and this perception-action integration is extensively studied using sensorimotor adaptation paradigms. The cerebellum is one of the key brain regions for sensorimotor adaptation, but previous attempts to modulate sensorimotor adaptation with cerebellar transcranial direct-current stimulation (ctDCS) produced inconsistent findings. Since both conscious/explicit learning and procedural/implicit learning are involved in adaptation, researchers have proposed that ctDCS only affects sensorimotor adaptation when implicit learning dominates the overall adaptation. However, previous research allowed both types of learning to co-exist without controlling their potential interaction under the influence of ctDCS. Here, we used error-clamp perturbation and gradual perturbation, two effective techniques for eliciting implicit learning only, to test the ctDCS effect on sensorimotor adaptation. We administered ctDCS to independent groups of participants while they implicitly adapted to visual errors. In Experiment 1, we found that cerebellar anodal tDCS had no effect on implicit adaptation induced by error clamp. In Experiment 2, we applied both anodal and cathodal stimulation and used a smaller error clamp to prevent a potential ceiling effect, and replicated the null effect. In Experiment 3, we used gradually imposed visual errors to elicit implicit adaptation but still found no effect of anodal tDCS. With a total of 174 participants, we conclude that the previously inconsistent tDCS effects on sensorimotor adaptation cannot be explained by the relative contribution of implicit learning.
Given that the cerebellum is simultaneously involved in explicit and implicit learning, our results suggest that the complex interplay between the two learning processes and large individual differences associated with this interplay might contribute to the inconsistent findings from previous studies on ctDCS and sensorimotor adaptation. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)

17 pages, 2740 KiB  
Article
Audiovisual Emotional Congruency Modulates the Stimulus-Driven Cross-Modal Spread of Attention
by Minran Chen, Song Zhao, Jiaqi Yu, Xuechen Leng, Mengdie Zhai, Chengzhi Feng and Wenfeng Feng
Brain Sci. 2022, 12(9), 1229; https://doi.org/10.3390/brainsci12091229 - 10 Sep 2022
Cited by 2
Abstract
It has been reported that attention to stimuli in the visual modality can spread to task-irrelevant but synchronously presented stimuli in the auditory modality, a phenomenon termed the cross-modal spread of attention, which can be either stimulus-driven or representation-driven depending on whether the visual constituent of an audiovisual object is further selected based on the object representation. The stimulus-driven spread of attention occurs whenever a task-irrelevant sound synchronizes with an attended visual stimulus, regardless of cross-modal semantic congruency. The present study recorded event-related potentials (ERPs) to investigate whether the stimulus-driven cross-modal spread of attention could be modulated by audiovisual emotional congruency in a visual oddball task where emotion (positive/negative) was task-irrelevant. The results first demonstrated a prominent stimulus-driven spread of attention regardless of audiovisual emotional congruency by showing that, for all audiovisual pairs, the extracted ERPs to the auditory constituents of audiovisual stimuli within the time window of 200–300 ms were significantly larger than the ERPs to the same auditory stimuli delivered alone. However, the amplitude of this stimulus-driven auditory Nd component during 200–300 ms was significantly larger for emotionally incongruent than congruent audiovisual stimuli when their visual constituents’ emotional valence was negative. Moreover, the Nd was sustained during 300–400 ms only for the incongruent audiovisual stimuli with emotionally negative visual constituents. These findings suggest that although the occurrence of the stimulus-driven cross-modal spread of attention is independent of audiovisual emotional congruency, its magnitude is nevertheless modulated by it even when emotion is task-irrelevant. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)

Review


18 pages, 4197 KiB  
Review
Intermodulation from Unisensory to Multisensory Perception: A Review
by Shen Xu, Xiaolin Zhou and Lihan Chen
Brain Sci. 2022, 12(12), 1617; https://doi.org/10.3390/brainsci12121617 - 25 Nov 2022
Abstract
Previous intermodulation (IM) studies have employed two (or more) temporal modulations of a stimulus, with different local elements of the stimulus being modulated at different frequencies. The resulting IM brain activity, obtained mainly from electroencephalography (EEG), is analyzed in the frequency domain. As a powerful tool that provides a direct and objective physiological measure of neural interaction, IM has emerged as a promising method for deciphering neural interactions in visual perception and revealing the underlying levels of perceptual processing. In this review, we summarize recent applications of IM in visual perception, detail the protocols and types of IM, and extend its utility and potential applications to the multisensory domain. We propose that IM could help reveal the potentially hierarchical processing of multisensory information and contribute to a deeper understanding of the underlying brain dynamics. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)

18 pages, 1077 KiB  
Review
Changing the Tendency to Integrate the Senses
by Saul I. Quintero, Ladan Shams and Kimia Kamal
Brain Sci. 2022, 12(10), 1384; https://doi.org/10.3390/brainsci12101384 - 13 Oct 2022
Cited by 7
Abstract
Integration of sensory signals that emanate from the same source, such as the sight of lip articulations and the sound of the voice of a speaking individual, can improve perception of the source signal (e.g., speech). Because momentary sensory inputs are typically corrupted by internal and external noise, there is almost always a discrepancy between the inputs, confronting the perceptual system with the problem of determining whether the two signals were caused by the same source or by different sources. Thus, whether or not multisensory stimuli are integrated, and the degree to which they are bound, is influenced by factors such as the prior expectation of a common source. We refer to this factor as the tendency to bind stimuli or, for short, the binding tendency. In theory, the tendency to bind sensory stimuli can be learned by experience through the acquisition of the probabilities of the co-occurrence of the stimuli. It can also be influenced by cognitive knowledge of the environment. The binding tendency varies across individuals and can also vary within an individual over time. Here, we review the studies that have investigated the plasticity of binding tendency. We discuss the protocols that have been reported to produce changes in binding tendency, the candidate learning mechanisms involved in this process, the possible neural correlates of binding tendency, and outstanding questions pertaining to binding tendency and its plasticity. We conclude by proposing directions for future research and argue that understanding mechanisms and recipes for increasing binding tendency can have important clinical and translational applications for populations or individuals with a deficiency in multisensory integration. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)
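
In Bayesian causal-inference models of the kind this literature builds on, the binding tendency corresponds to the prior probability of a common cause: raising that prior raises the posterior belief that two discrepant cues should be bound. A minimal sketch of the standard model follows; the function name and numerical settings are illustrative assumptions, not taken from the review.

```python
import math

def prob_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Posterior probability that visual and auditory cues share one cause.

    x_v, x_a          noisy visual/auditory measurements (e.g., locations)
    sigma_v, sigma_a  sensory noise standard deviations
    sigma_p           width of the zero-mean spatial prior over sources
    p_common          prior probability of a common cause (binding tendency)
    """
    # Likelihood of both measurements under one shared source,
    # marginalized over the (Gaussian-distributed) source location.
    var_c = (sigma_v**2 * sigma_a**2
             + sigma_v**2 * sigma_p**2
             + sigma_a**2 * sigma_p**2)
    like_c = math.exp(-((x_v - x_a)**2 * sigma_p**2
                        + x_v**2 * sigma_a**2
                        + x_a**2 * sigma_v**2) / (2.0 * var_c))
    like_c /= 2.0 * math.pi * math.sqrt(var_c)

    # Likelihood under two independent sources, one per modality.
    var_v = sigma_v**2 + sigma_p**2
    var_a = sigma_a**2 + sigma_p**2
    like_i = (math.exp(-x_v**2 / (2.0 * var_v)) / math.sqrt(2.0 * math.pi * var_v)
              * math.exp(-x_a**2 / (2.0 * var_a)) / math.sqrt(2.0 * math.pi * var_a))

    # Bayes' rule over the two causal structures.
    return like_c * p_common / (like_c * p_common + like_i * (1.0 - p_common))
```

A larger spatial discrepancy between the cues lowers the posterior, while a higher p_common (a stronger binding tendency) raises it, which is why changes in this single prior can shift whether stimuli are perceived as bound.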

13 pages, 598 KiB  
Review
Multisensory Integration in Caenorhabditis elegans in Comparison to Mammals
by Yanxun V. Yu, Weikang Xue and Yuanhua Chen
Brain Sci. 2022, 12(10), 1368; https://doi.org/10.3390/brainsci12101368 - 09 Oct 2022
Cited by 2
Abstract
Multisensory integration refers to sensory inputs from different sensory modalities being processed simultaneously to produce a unitary output. Surrounded by stimuli from multiple modalities, animals utilize multisensory integration to form a coherent and robust representation of the complex environment. Even though multisensory integration is fundamentally essential for animal life, the underlying mechanisms, especially at the molecular, synaptic, and circuit levels, remain poorly understood. The study of sensory perception in Caenorhabditis elegans has begun to fill this gap. We have gained considerable insight into the general principles of sensory neurobiology owing to C. elegans’ highly sensitive perception, relatively simple nervous system, ample genetic tools, and completely mapped neural connectome. Many interesting paradigms of multisensory integration have been characterized in C. elegans, in which input convergence occurs at the sensory neuron or interneuron level. In this narrative review, we describe representative cases of multisensory integration in C. elegans, summarize the underlying mechanisms, and compare them with those in mammalian systems. Despite the differences, we believe C. elegans can provide unique insights into how processing and integrating multisensory inputs can generate flexible and adaptive behaviors. With the emergence of whole-brain imaging, the ability to monitor nearly the entire nervous system of C. elegans may be crucial for understanding the function of the brain as a whole. Full article
(This article belongs to the Special Issue The Neural Basis of Multisensory Plasticity)
