Article

Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions

1
Department of Music, Northeastern University, Boston, MA 02115, USA
2
Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
*
Author to whom correspondence should be addressed.
Brain Sci. 2022, 12(12), 1676; https://doi.org/10.3390/brainsci12121676
Submission received: 2 November 2022 / Revised: 21 November 2022 / Accepted: 1 December 2022 / Published: 7 December 2022

Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings across a sample of younger and older adults. We specifically measured neural entrainment to the level of musical pulse—quantified here as the phase-locking value (PLV)—after normalizing the PLVs to each musical recording’s detected pulse frequency. As predicted, we observed strong neural phase-locking to musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.

1. Introduction

Music-based interventions (MBIs), such as receptive MBI (i.e., interventions that involve listening to music), have become increasingly of interest for improving well-being across the lifespan [1,2]. Despite the growing inclusion of MBIs into healthcare protocols, meta-analyses suggest that they often produce variable and inconsistent effects on clinical and health-related outcomes [3,4,5,6,7]. Such variability in the efficacy of music-based interventions may arise, in part, from the diversity of protocols that underlie MBIs (e.g., self-selected vs. clinician-selected music), the heterogeneity of clinical populations that are targeted by MBIs, and individual differences in the sensitivity to musical features that constitute the intervention (e.g., rhythm, melody, motor-movement, and social interactions during musical experiences) [4,7,8]. While research has identified key neural networks that contribute to music processing [9,10], little is known about the underlying neurobiological mechanisms that are specifically engaged by MBIs [11,12], and how aging affects neural responses to musical structure (e.g., rhythm, melody, and harmony) [13]. However, understanding how MBIs engage the nervous system and the impact of aging on the neural processing of music has important implications for designing and implementing MBIs; understanding the effects of naturalistic music-listening and -making on brain function, cognitive health, and well-being; and explaining individual outcomes following the intervention [13,14,15].
Music engages multiple neural systems that subserve sensorimotor functions, executive control, reward processing, and vestibular function [9,10,16]. Musical experiences involve a listener’s idiosyncratic musical knowledge, autobiographical memories, affective state, and subjective interpretation of a composer’s and performer’s musical intentions [17,18,19,20,21]. This inherent subjectivity of musical experiences may explain why self-selected music produces enhanced brain responses to music [8,11,22,23]. Neuro-imaging research, for example, has found that listening to self-selected music and music perceived as pleasurable increases activation in and connectivity between auditory and reward systems [22,23]. Furthermore, music that is more familiar and selected by the listener is especially effective at engaging multiple brain areas [24]: one study showed increased functional connectivity between auditory and reward systems when participants listened to self-selected music, with effects increasing after a two-month MBI [11]. These findings may explain why music-based interventions that feature self-selected music yield better clinical outcomes [25], such as in anxiety reduction and improvements in task performance and enjoyment [26].
In addition to engaging auditory, motor, and reward systems, music also engages neural oscillations—patterns of rhythmic activity arising from excitatory–inhibitory neuronal interactions. During music-listening, endogenous oscillations in auditory–motor systems [27,28,29] adapt their activity to rhythmic timescales in music [28,29,30,31,32,33,34]. For example, musical pulse occurs naturally within a frequency range of 0.5–4 Hz, with a prominent frequency generally centered around 2 Hz [34,35]. Musical meter includes the pulse frequency, sub-divisions of the pulse level that occur between 4–8 Hz, and slower beats that group pulse cycles (<2 Hz). These pulse and metrical frequencies overlap with delta (e.g., 0.5–4 Hz) and theta (e.g., 4–8 Hz) bands of endogenous activity generated by the brain [36]. Electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of brain activity taken during music-listening have observed the entrainment of delta and theta responses to the rhythmic structure of music, resulting in increased power, phase-locked responses, and mode-locked responses at frequencies related to pulse and meter [27,31,32,33,36,37]. Such phase- and mode-locked responses have been recorded to auditory stimuli falling within the range of human rhythm [30,33,38,39] and pitch [40,41,42,43] perception, suggesting that neural entrainment may be a general, dynamical property underlying brain function [44].
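The mapping between musical tempo and the delta/theta bands described above follows from simple arithmetic: a tempo in beats per minute divides by 60 to give a pulse frequency in Hz. A minimal sketch (values are illustrative):

```python
def tempo_to_hz(bpm: float) -> float:
    """Convert a musical tempo in beats per minute to a pulse frequency in Hz."""
    return bpm / 60.0

# A 120-BPM pulse sits at 2 Hz, the prominent pulse frequency in natural music
# and squarely in the delta band (0.5-4 Hz).
pulse = tempo_to_hz(120)    # 2.0 Hz
subdivision = 2 * pulse     # 4.0 Hz: first subdivision, crossing into theta
grouping = pulse / 2        # 1.0 Hz: a two-beat grouping of pulse cycles
```

This is why pulse and metrical frequencies in typical music overlap the delta (0.5–4 Hz) and theta (4–8 Hz) ranges of endogenous neural activity.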
While music has been shown to entrain neural activity, impairments in neural entrainment to musical features are associated with aging [13,44,45,46,47,48,49]. Relatedly, several neurodegenerative disorders, such as Mild Cognitive Impairment (MCI) and Alzheimer’s disease (AD), are associated with disrupted neural activity across the same frequency bands that are driven by music (e.g., delta, theta) [50,51,52,53]. These findings have motivated the development of non-invasive interventions that aim to entrain aberrant brain activity with rhythmic stimulation to promote healthy cognitive aging [54] and slow disease progression [54,55,56].
The reviewed work suggests that entraining brain activity through music could be an effective strategy for treating neurodegenerative disorders with MBIs. However, research on neural entrainment to music has typically used non-ecological musical stimuli (e.g., artificial auditory rhythms, such as amplitude-modulated tones) or short excerpts of natural music featuring only a few prominent pulse frequencies (e.g., [31,33,37,38,48]), neither of which captures the rich musical experiences that often constitute MBIs. In contrast to the design of these studies, MBIs, especially those that allow participants to select their own musical materials, employ a range of natural music with rhythmic structures that reflect varying pulse and metrical frequencies. Even within a single piece of music, pulse and tempo fluctuations can occur to communicate expressive intent and large-scale musical structure [57,58,59]. This inherent variability in rhythmic content and fluctuation in tempi poses a methodological challenge for analyzing neural entrainment to music under listening conditions that are more representative of current MBIs (e.g., across musical stimuli that are designed to feature different pulse frequencies, or for musical stimuli that are selected by a listener and cannot be experimentally controlled).
Neurodynamical models of musical rhythm, such as gradient-frequency neural networks [36,60,61,62,63,64,65] which simulate the entrainment of neural ensembles to musical rhythm, have successfully explained and predicted neural, behavioral, and psychological responses to music in human listeners [36,39,66,67]. In addition to accounting for human responses to music, these models can also be used for signal-processing of biological signals [68] and for music-feature analysis, including beat-finding, tempo-tracking, and chord estimation [35,62,68]. This suggests that neurodynamical models of music could inform the analysis of neural entrainment to more naturalistic musical stimuli by estimating pulse and meter-related frequencies that are likely to be perceived by human listeners.
In the current study, we investigated younger and older adults’ neural entrainment to musical pulse during a period of non-invasive, audiovisual stimulation that featured self-selected music and music-synchronizing lights. We used a modified version of a neurodynamical model of human pulse perception [36] to estimate the perceived pulse and metrical frequencies for each musical recording, and then measured neural entrainment to musical pulse, after accounting for each musical recording’s unique pulse and metrical frequencies. Motivated by the reviewed work, we predicted that we would observe enhanced neural entrainment to the level of musical pulse in self-selected music, relative to non-rhythmic levels [36,39,69]. Secondly, we predicted that neural responses to the pulse would be relatively stronger at fronto–central electrodes, reflecting synchronized neural activity to music arising from the auditory system [34,70,71]. Finally, we expected younger adults to exhibit stronger neural entrainment to the musical pulse, compared to older adults, given that degraded neural responses to sound are often associated with aging [45,48].

2. Materials and Methods

2.1. Participants

A total of 16 young adults (mean age = 19.81 years, range = 18–22 years; 7 males, 9 females) and 16 older adults (mean age = 70.94 years, range = 55–81 years; 5 males, 11 females) were recruited for a behavioral and electrophysiology (EEG) study in the Music Imaging and Neural Dynamics (MIND) Lab at Northeastern University. Across the younger and older adults, 94% of participants reported predominant right-handedness (30 right-handed, 2 left-handed), and 93% reported English as their first language (28 English, 1 Fulani, 1 Norwegian, 1 Mandarin, and 1 German). The young adults participated in return for course credit at Northeastern University. The older adults were compensated at $20 per hour for their participation. The study was approved by the IRB of Northeastern University (IRB #19-03-20). After informed consent, participants completed a battery of behavioral tasks, as described in the Behavioral Battery section below.

2.2. Procedure

As part of a larger project on the development of a novel music-based intervention for aging, participants underwent a single session of audiovisual stimulation, which consisted of listening to self-selected music while viewing music-synchronized lights. Next, participants completed a visual working-memory task to assess cognitive functioning [54]. Results on this working-memory task will be reported in a separate manuscript.

2.3. Behavioral Battery

Prior to the audiovisual stimulation, participants completed a behavioral battery to assess musical reward and sensitivity (Barcelona Music Reward Questionnaire, BMRQ) [72], musical sophistication (Goldsmiths Musical Sophistication Index, Gold-MSI) [73], and melodic contour perception (Montreal Battery of Evaluation of Amusia, MBEA) [74]. Participants also self-reported basic demographic information, age, sex, medical history (e.g., hearing ability), length and nature of previous musical training, native languages, and handedness.

2.4. Audiovisual Stimulation

Prior to their in-lab session, participants selected six individual musical recordings for a period of naturalistic music listening while viewing light-emitting diode (LED) lights that synchronized to the pulse frequency of the music. Musical recordings were presented to the participants via a Macbook Air running a custom experiment script in PsychoPy Version 3.2.3 [75] and Sennheiser CX 300-II earbuds. Music-synchronized lighting effects were created using a SynchronyTM device (Oscilloscape LLC) that analyzes the rhythmic structure of music in real-time and flashes LED lights synchronized to the music’s pulse. The display pattern on SynchronyTM was set to “color pulse,” and the color palette was set to the third color palette (i.e., a set of cool colors). With these settings, the device softly pulses LED lights at the rate of the musical pulse, using a palette of cool colors (e.g., blues, purples, whites) that change at the rate of the measure level. For the younger adults, the visual stimulation was presented via a WS-2811 LED strip that was affixed to a table located over the legs of the participants in a seated position. During the audiovisual stimulation, younger participants were instructed to remain motionless, listen to the music, and engage with the lights by foveating directly at the LED strip. For the older adults, the visual stimulation was presented on a circular LED frame consisting of a WS-2811 LED strip, a circular metal frame with a 21” diameter, and a fixation cross located at the center of the frame, such that the LED strip was positioned at a 20° visual angle from the observer along the perimeter of the circular frame. This circular display was chosen to be visible to the participant through their peripheral vision, which is more sensitive to small changes in brightness in dim light situations due to the richness of rod cells in the retina at 10–20° from foveation (Purves et al., 2019).
During the audiovisual stimulation, older participants were instructed to remain motionless, listen to the music, and engage with the lights by foveating on the fixation cross.

2.5. EEG Recording

Participants’ EEGs were collected using a 64-channel BrainVision system, arranged according to the international 10–20 standard, and PyCorder software. Each EEG was recorded using an online reference of Fp1 and a 5000 Hz sampling rate. The EEG time-series were recorded directly to disk in the BrainVision format, with trigger events that denoted the beginning and end of each musical recording. Impedances were kept <30 kOhm.

2.6. EEG Preprocessing

The raw electroencephalogram was preprocessed using custom MATLAB (R2019a,b) routines and the EEGLab library [76] for MATLAB (versions 2020.1 and 2021.1). An initial epoch of the EEG data was created to remove superfluous activity unrelated to the audiovisual stimulation, i.e., activity that began 1 s prior to the start of the first musical recording and 1 s following the completion of the final musical recording. After this initial epoching, channel data were down-sampled from a 5000 Hz to a 1000 Hz sampling rate. The EEG channels were then re-referenced to the bilateral mastoids at electrodes TP9 and TP10. To remove slow drifts and high-frequency activity unrelated to the audiovisual stimulation, EEG data were high-pass filtered at 1 Hz and low-pass filtered at 55 Hz using EEGLab’s Hamming-windowed sinc finite impulse response (FIR) filter (eegfiltnew). Residual 60 Hz line noise was removed using CleanLine’s multi-taper filter with a threshold (i.e., p-value) set to 0.05. After filtering, bad channels were identified and removed using a semi-automated procedure, as follows: bad channels were automatically rejected using EEGLab’s joint-probability algorithm with a normalized threshold of 5 standard deviations. Remaining channels were then visualized, and additional noisy channels were removed manually after inspection by several trained research assistants (average number of channels removed per participant = 3.45, SD = 3.10). Following the removal of bad channels, non-linearities in the electroencephalogram were corrected using the artifact subspace reconstruction (ASR) algorithm [77] with a standard-deviation threshold set to 20 and a k-window set to 0.25, parameters which have been shown to correct artifact-driven non-linearities in EEG data while preserving brain-related activity [78]. Finally, EEG source decomposition was conducted, using independent components analysis, to remove components that reflected eye and cranial-muscle artifacts.
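The pipeline above was implemented in MATLAB with EEGLab; for readers who want a concrete picture of the first two steps, the down-sampling and band-pass filtering can be sketched in Python with SciPy. This is a simplified stand-in, not the EEGLab routines: eegfiltnew uses a windowed-sinc FIR filter, whereas a Butterworth filter is used here purely for illustration.

```python
import numpy as np
from scipy.signal import decimate, butter, filtfilt

def downsample_and_bandpass(eeg, fs_in=5000, fs_out=1000, lo=1.0, hi=55.0):
    """Down-sample each channel, then band-pass filter with zero phase.

    eeg: array of shape (n_channels, n_samples) sampled at fs_in Hz.
    Illustrative stand-in for EEGLab's resampling + eegfiltnew steps.
    """
    q = fs_in // fs_out                                  # decimation factor (5)
    eeg_ds = decimate(eeg, q, ftype="fir", axis=-1)      # anti-aliased down-sampling
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs_out)
    return filtfilt(b, a, eeg_ds, axis=-1)               # zero-phase filtering

# Demo on synthetic data: a 2 Hz "neural" component survives the 1-55 Hz
# band-pass, while a 200 Hz component is removed.
fs = 5000
t = np.arange(0, 10, 1 / fs)
eeg = (np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 200 * t))[None, :]
clean = downsample_and_bandpass(eeg)
```

Zero-phase (forward-backward) filtering matters here because subsequent phase-locking analyses depend on undistorted phase.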
After decomposition, the independent components (ICs) were subsequently classified using ICALabel, a pre-trained, machine learning classifier that computes probabilities for multiple source classes [79]. The ICs classified as an eye or muscle source with a 90% probability were rejected automatically. Remaining ICs were manually inspected by trained research assistants using EEGLAB’s extended IC properties (e.g., IC power spectra, topographies, and time-series) and removed if they contained extensive eye, muscle, or line-noise artifacts (average number of ICs removed per participant = 5.94, SD = 3.82). After removal of ICs, spherical interpolation was used to interpolate previously rejected channels. Finally, the full preprocessed EEG time-series was epoched into six data sets per participant, with each data set reflecting one of the participant’s self-selected musical recordings, to conduct recording-level analyses.

2.7. Music-Feature Analysis: Estimating Pulse Frequencies and Identifying Stable Epochs

Before estimating neural entrainment to musical rhythm, a music-feature analysis of each musical recording was conducted to identify an epoch that contained a stable pulse frequency. Identifying a single epoch with a stable pulse frequency for each musical recording allowed us to obtain a more accurate estimate of neural entrainment at the pulse frequency by controlling for large tempo changes. First, a modified version of the oscillator network model described in Large et al. (2015) was used to estimate the pulse and metrical frequencies perceived by human listeners. The audio signal of each musical recording was processed with a middle-ear filter [80], and then complex-domain onset detection [81] was applied to derive an onset signal containing pulses triggered by the onset (i.e., attack) of individual musical events. The oscillator network was driven by the onset signal, and metrical frequencies (e.g., pulse, harmonic, and/or subharmonic) were identified from peaks in the oscillator amplitudes. Stable epochs were identified using the same algorithm. The algorithm identified (1) the most salient frequency in the delta range (≤3.5 Hz), (2) the most salient harmonic in the theta range (≥3.5 Hz), and (3) the most salient subharmonic of the pulse. It then identified the first interval of at least 2 min in which all of these frequencies remained stable within a few percent. The neural-entrainment analyses that follow were conducted over the musical-recording epochs identified by this algorithmic method to have a consistent pulse frequency.
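The oscillator-network model itself is not reproduced here; as a greatly simplified stand-in, the idea of picking the most salient delta-range frequency from an onset signal can be sketched spectrally (a numpy-only illustration, not the Large et al. model, which entrains nonlinear oscillators rather than taking a Fourier transform):

```python
import numpy as np

def estimate_pulse_frequency(onset_signal, fs, f_max=3.5):
    """Return the most salient frequency at or below f_max Hz.

    Crude spectral stand-in for step (1) of the algorithm: the true model
    reads pulse/meter frequencies from entrained oscillator amplitudes.
    """
    spectrum = np.abs(np.fft.rfft(onset_signal - onset_signal.mean()))
    freqs = np.fft.rfftfreq(len(onset_signal), d=1.0 / fs)
    delta = (freqs > 0) & (freqs <= f_max)           # delta-range candidates
    return freqs[delta][np.argmax(spectrum[delta])]

# Demo: an onset train with one event every 0.5 s implies a 2 Hz pulse.
fs = 100
onsets = np.zeros(60 * fs)
onsets[::fs // 2] = 1.0
pulse_hz = estimate_pulse_frequency(onsets, fs)
```

Real onset signals are far noisier than this impulse train, which is one motivation for the oscillator-based approach used in the study.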

2.8. Neural Entrainment to Rhythm: Phase-Locking Values

Neural entrainment was operationalized as the phase-locking value (PLV) [82] across pulse- and meter-related frequencies—specifically 0.25–5 Hz—between the amplitude envelope of each musical recording and preprocessed EEG data (Figure 1). The amplitude envelope of each musical recording was estimated using the method by [83], as implemented in the multivariate temporal response function (mTRF) toolbox [84] using a sample rate of 1000 Hz. To compute PLVs, first, a complex wavelet transform of the amplitude envelope of the musical recording and each channel of preprocessed EEG data was conducted using logarithmically-spaced complex Morlet wavelets from 0.25–5 Hz, with 15 bins per octave. The number of cycles per wavelet was determined algorithmically by doubling the center frequency of the Morlet wavelet and rounding to the nearest integer. This produced a total of 65 Morlet wavelets with an increasing number of cycles (range of 3–8 cycles) and center frequencies that spanned the range of musical rhythm [36]. The amplitude envelopes of the musical recordings and the EEG-channel data were, then, convolved with the complex Morlet wavelets. Following the convolutions, phase angles were extracted from the complex-numbered time-series to estimate the instantaneous phase of the amplitude envelope of the musical recordings and EEG-channel data for each complex Morlet wavelet. From the phase-angle time-series, neural entrainment to music was quantified as the phase-locking value (PLV) using the following Equation (1), for each EEG channel and each complex Morlet wavelet [81]:
PLV_{j,k} = \left| \frac{1}{n} \sum_{t=1}^{n} e^{i\left(\theta_{k,t}^{(1)} - \theta_{j,k,t}^{(2)}\right)} \right|   (1)
Here, PLV_{j,k} is the phase-locking value (PLV) of the jth EEG channel for the kth complex Morlet wavelet. Furthermore, θ_{k,t}^{(1)} and θ_{j,k,t}^{(2)} correspond to the instantaneous phase angles of the amplitude envelope of the music recording and the EEG data, respectively; t corresponds to the time point in discrete time, e is Euler’s number, and i is the imaginary unit. The PLV is a scalar value (0 ≤ PLV ≤ 1), defined as the magnitude of the mean resultant vector calculated from the distribution of EEG–music relative phases. Relatively higher PLVs indicate stronger phase-locking between EEG signals and the amplitude envelope of the musical recording.
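The computation in Equation (1) can be sketched in a few lines of numpy (the wavelet parameters here are illustrative, not the exact 65-wavelet, 3–8-cycle bank described above): extract the instantaneous phase of each signal by convolution with a complex Morlet wavelet, then take the magnitude of the mean resultant vector of the relative phase.

```python
import numpy as np

def morlet_phase(x, fs, f0, n_cycles=6):
    """Instantaneous phase of x at center frequency f0 via a complex Morlet wavelet."""
    sigma_t = n_cycles / (2.0 * np.pi * f0)              # wavelet width in seconds
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))
    return np.angle(np.convolve(x, wavelet, mode="same"))

def plv(theta_music, theta_eeg):
    """Phase-locking value: magnitude of the mean resultant relative-phase vector."""
    return np.abs(np.mean(np.exp(1j * (theta_music - theta_eeg))))

# Demo: a phase-lagged copy of a 2 Hz signal is perfectly phase-locked at 2 Hz;
# an unrelated 3.2 Hz signal is not. Edges are trimmed to avoid wavelet artifacts.
fs = 250
t = np.arange(0, 20, 1 / fs)
theta_music = morlet_phase(np.cos(2 * np.pi * 2 * t), fs, 2.0)
theta_locked = morlet_phase(np.cos(2 * np.pi * 2 * t - 0.8), fs, 2.0)
theta_unrelated = morlet_phase(np.cos(2 * np.pi * 3.2 * t), fs, 2.0)
trim = slice(2 * fs, -2 * fs)
plv_locked = plv(theta_music[trim], theta_locked[trim])
plv_unrelated = plv(theta_music[trim], theta_unrelated[trim])
```

Note that a constant phase lag still yields PLV ≈ 1: the measure indexes the consistency of the relative phase, not zero lag.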

2.9. Pulse Normalization of Phase-Locking Values

Because participants’ self-selected musical recordings featured different pulse frequencies, we normalized the PLVs for each musical recording to a pulse frequency of 2 Hz prior to second-level analyses (i.e., before averaging PLVs across participants, electrodes, and musical recordings). Normalizing the PLVs to the same pulse frequency allowed us to investigate neural entrainment to musical pulse at the aggregate level in our groups of younger and older adults. For instance, if our music-feature analysis detected a prominent pulse frequency of 2.5 Hz for a given recording, the phase-locking values would be shifted in the frequency domain from 2.5 Hz to 2. Thus, in this analysis, the dimensionless unit of 2 corresponds to the pulse frequency (henceforth called the “pulse level”), 1 corresponds to a normalized subharmonic frequency (henceforth called the “subharmonic level”), and 4 corresponds to a normalized harmonic frequency (henceforth called the “harmonic level”). Motivated by previous analyses of neural entrainment to auditory rhythms [71], we also selected a level between the pulse and harmonic levels (dimensionless unit 3, henceforth called the “off-pulse level”) to investigate whether neural entrainment was stronger at the predicted pulse level, relative to a neighboring, non-pulse level. Finally, for exploratory analyses of the sub-harmonic and harmonic levels, we also defined an “off-subharmonic level” (unit 1.5) and an “off-harmonic level” (unit 5).
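On one reading of this procedure, the normalization amounts to rescaling each recording’s frequency axis so its detected pulse frequency lands on the value 2, then resampling the PLV curve onto a common grid. A numpy sketch under that assumption (the function name and grid are illustrative):

```python
import numpy as np

def normalize_plv_to_pulse(freqs, plvs, pulse_hz, target=2.0):
    """Shift a PLV spectrum so the detected pulse frequency maps onto `target`.

    freqs: wavelet center frequencies (Hz); plvs: PLV at each frequency.
    Returns the PLV curve resampled onto the original grid, now read in
    dimensionless units where `target` is the pulse level.
    """
    scaled = freqs * (target / pulse_hz)   # e.g., a 2.5 Hz pulse maps to 2.0
    return np.interp(freqs, scaled, plvs)  # resample onto the common grid

# Demo: a PLV peak at a 2.5 Hz pulse moves to the dimensionless pulse level of 2.
freqs = np.linspace(0.25, 5.0, 200)
raw_plvs = np.exp(-((freqs - 2.5) ** 2) / (2 * 0.1 ** 2))
norm_plvs = normalize_plv_to_pulse(freqs, raw_plvs, pulse_hz=2.5)
```

After this step, pulse (2), subharmonic (1), and harmonic (4) levels are directly comparable across recordings with different tempi.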
To test our a priori predictions (e.g., that neural entrainment would emerge at the pulse level, and that neural entrainment would be stronger in younger adults) at the group level, PLVs were averaged across each participant’s musical recordings and all EEG electrodes, yielding one grand-averaged PLV for each participant. For the grand-averaged PLVs, 95% confidence intervals were bootstrapped for each age group using the boot library for R [85], with the normal approximation and 10,000 bootstrap samples [86]. In addition to calculating grand-averaged PLVs, we also explored whether PLVs were strongest at fronto–central electrodes, consistent with an auditory response [87], and differed across clusters of EEG channels. Nine electrode clusters, consisting of six electrodes each, used previously in auditory-related EEG research [88], were selected for this analysis. Table 1 presents the nine electrode clusters and their six constituent EEG channels.
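The intervals were computed with R’s boot package; the normal-approximation bootstrap it implements can be sketched in Python as follows (an illustrative re-implementation, and the PLV values below are invented for the demo, not the study’s data):

```python
import numpy as np

def bootstrap_normal_ci(values, n_boot=10_000, seed=0):
    """95% normal-approximation bootstrap CI for the mean of `values`.

    Mirrors the shape of boot.ci's "norm" interval: center on the observed
    mean minus the bootstrap bias estimate, scale by the bootstrap SE.
    """
    rng = np.random.default_rng(seed)
    resamples = rng.choice(values, size=(n_boot, len(values)), replace=True)
    boot_means = resamples.mean(axis=1)
    bias = boot_means.mean() - np.mean(values)   # bootstrap bias estimate
    se = boot_means.std(ddof=1)                  # bootstrap standard error
    z = 1.959963984540054                        # two-sided 95% normal quantile
    center = np.mean(values) - bias
    return center - z * se, center + z * se

# Demo on hypothetical grand-averaged PLVs for one age group.
plv_sample = np.array([0.31, 0.28, 0.35, 0.40, 0.25, 0.33, 0.30, 0.37, 0.29, 0.34])
lo, hi = bootstrap_normal_ci(plv_sample)
```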

2.10. Linear Mixed-Effects Models

As our sample sizes across age groups were unbalanced [89], linear mixed-effect models (LMEs) were implemented using the LME4 and AFEX [90] libraries for R to test for the effects of rhythmic level, age group, and electrode cluster on neural entrainment to music. Across the LMEs, Satterthwaite’s method was used to estimate degrees of freedom for F-Tests and to compute probability values. Calculating standard effect sizes for LME is an on-going area of research [91], and not all types of model objects currently have software support for computing effect sizes for mixed models. For models built using the LME4 library, the semi-partial (marginal) R-squared was calculated as an effect size for fixed effects [91] using the developer version of the r2glmm package for R (accessed on GitHub 24 May 2022). For models built using the AFEX library, unstandardized effects (e.g., mean differences) for contrasts of interest are reported using the emmeans library [92]. The global α-level was set to 0.05. In instances of multiple comparisons for the F-Tests that did not involve a priori hypotheses [93], Holm’s correction was used to control the family-wise error rate and produce corrected p-values [94].

3. Results

3.1. Behavioral Battery Results

Older adults did not significantly differ from younger adults in music perception abilities as assessed using the MBEA, according to Welch’s independent t-test, t(28.932) = 1.458, p = 0.1556, 95% CI = [−0.7806, 4.6556], and Cohen’s D = −0.54. However, consistent with the previous literature [72,73,95], older adults scored significantly lower than younger adults in music reward sensitivity as assessed using the BMRQ, according to Welch’s independent t-test, t(26.028) = 2.4916, p = 0.01942, 95% CI = [1.9869, 20.7131], and Cohen’s D = −0.93, and in general musical sophistication as assessed using the Gold-MSI, according to Welch’s independent t-test, t(28.634) = 3.4111, p = 0.001946, 95% CI = [18.4922, 73.9495], and Cohen’s D = −1.27. Table 2 reports the means and standard deviations of age, BMRQ, Gold-MSI, and MBEA in each group.

3.2. Natural Pulse Frequency of Self-Selected Music Did Not Differ between OA and YA

Out of 96 possible musical recordings per age group (16 participants × 6 musical recordings for each participant), a total of 77 musical recordings for the YA (n = 16, mean number of recordings per participant = 4.81), and 61 musical recordings for the OA (n = 15, mean number of recordings per participant = 4.07) survived the music-feature analysis and were subjected to the neural-entrainment analysis. To assess the variability of and age-related differences in natural pulse frequencies for self-selected music across the younger and older adults, probability density functions were calculated for the natural pulse frequencies for each age group (Figure 2). Both YA and OA exhibited similar distributions in the natural pulse frequencies of their self-selected music, with a mode in their respective distributions arising at ~2 Hz, consistent with previous work suggesting that 2 Hz is a frequently occurring pulse frequency in natural music [35,36]. A Kolmogorov–Smirnov test of the pulse-frequency distributions suggested that the distributions of the natural pulse frequencies did not significantly differ between the YA and OA groups, D = 0.16, p = 0.36, suggesting that older and younger adults selected music with similar natural pulse frequencies.
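A two-sample Kolmogorov–Smirnov comparison of this kind can be reproduced in outline with SciPy; the pulse-frequency samples below are randomly generated for illustration only and are not the study’s data (sample sizes mirror the 77 YA and 61 OA recordings).

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical pulse-frequency samples (Hz), clipped to the 0.5-4 Hz pulse range.
rng = np.random.default_rng(1)
ya_pulse = np.clip(rng.normal(2.0, 0.5, 77), 0.5, 4.0)
oa_pulse = np.clip(rng.normal(2.0, 0.5, 61), 0.5, 4.0)

# D is the maximum distance between the two empirical CDFs.
result = ks_2samp(ya_pulse, oa_pulse)
```

A small D with a large p-value, as reported above (D = 0.16, p = 0.36), indicates no detectable difference between the two pulse-frequency distributions.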

3.3. Neural Entrainment at the Pulse Level Did Not Differ between OA and YA

Based on prior research, we predicted a priori that, on average, stronger PLVs would emerge at the pulse level, relative to the neighboring off-pulse level [71]. Moreover, we predicted that younger adults would exhibit stronger phase-locking to the musical pulse, relative to older adults [48]. First, we tested these predictions in a global manner, using the grand-averaged PLVs for each participant in the YA and OA groups. As shown in Figure 3A, when averaged across all electrodes, younger (N = 16) and older adults (N = 15) exhibited strong PLVs at the pulse level, manifested as a local maximum in the PLV plot, consistent with this prediction. Contrary to our predictions, however, the grand-averaged PLVs at the level of the pulse appeared comparable across the YA and OA groups. We implemented an LME model to investigate whether neural entrainment was stronger at the pulse level, relative to the off-pulse level, and statistically different across age groups; in particular, we expected a priori to observe a rhythmic level*age group interaction, reflecting stronger neural entrainment at the pulse level and in younger participants. For the LME, the grand-averaged PLV for each participant was entered as the criterion variable, and age group (e.g., YA, OA) and rhythmic level (e.g., pulse, off-pulse) were entered as fixed effects with an interaction term. Participants were added as a random effect. As random intercept-and-slope models, with and without correlated intercepts and slopes, failed to converge, participants were ultimately added using a random-intercept model, according to R’s formula notation, as follows:
lmer(PLV~Rhythmic_Level*Age_Group + (1|ID), data = .)
As shown in Table 3a, the LME returned a significant main effect of rhythmic level, F(1,29) = 35.17, p ≤ 0.001, semi-partial R2 = 0.144, but not a significant main effect of age group, F(1,29) = 0.13, p = 0.72, semi-partial R2 = 0.005, or a rhythmic level*age group interaction, F(1,29) = 0.21, p = 0.65, semi-partial R2 = 0.002, suggesting that neural entrainment significantly differed across the pulse and off-pulse levels, but not age groups. A post-hoc test for the main effect of rhythmic level (Figure 3B) revealed that the PLVs at the pulse level were significantly stronger, relative to the off-pulse level, t(29) = 5.93, p < 0.0001.

3.4. Neural Entrainment to the Pulse Differed across Electrode Clusters and Age Groups

While neural entrainment to musical pulse did not significantly differ across age groups when the PLVs were grand-averaged across all electrodes, examinations of the topographic representations of PLVs (Figure 4A and Figure 5A,B) indicated that there may be age-related differences in the underlying neural networks that entrain to musical pulse and in the sensitivity to the concurrent visual stimulation. For instance, YA exhibited stronger PLVs at the pulse level near a cluster of parietal–occipital electrodes (i.e., near electrode cluster O), relative to the off-pulse level. In contrast, OA exhibited a more uniform topography of PLVs at both pulse and off-pulse levels. Next, we tested whether the topographic representation of PLVs interacted with the pulse and off-pulse levels and the YA and OA age groups. We ran an LME with the PLVs as the criterion variable; electrode cluster (e.g., the nine electrode clusters defined in Table 1), age group (e.g., YA, OA), and rhythmic level (e.g., pulse, off-pulse) as fixed effects with interaction terms; and participant as a random effect. Because a model with correlated random intercepts and slopes failed to converge, we estimated an LME model using the AFEX library with participants entered as a random effect with uncorrelated random slopes and intercepts, a model which successfully converged:
mixed(PLV~Electrode_Group*Age_Group*Rhythmic_Level + (Electrode_Group + Rhythmic_Level||ID), data = ., expand_re = TRUE)
The LME returned a significant three-way electrode group*rhythmic level*age group interaction (Table 3b), F(8, 3006.57) = 7.53, p < 0.001, p-Holm < 0.001, suggesting that the strength of neural entrainment to music depended on the specific electrode cluster, rhythmic level, and age group (Figure 4B). In particular, younger adults displayed enhanced phase-locking to the pulse level near electrode cluster O: a post-hoc test revealed that younger adults had stronger PLVs at the pulse level, relative to the off-pulse level, at electrode cluster O, t(39) = 4.57, mean difference = 0.0299 (SE = 0.00653). A further post-hoc test comparing YA and OA PLVs at electrode cluster O revealed that YA had slightly, though not significantly, stronger PLVs at the pulse level, t(191) = 0.842, mean difference = 0.0107 (SE = 0.0128).
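As a rough illustration of this modeling approach, the structure of the LME (electrode cluster, age group, and rhythmic level as fixed effects with full interactions, plus a per-participant random effect) can be sketched in Python with statsmodels on simulated data. All variable names and effect sizes below are hypothetical, and the sketch simplifies the published model to a random intercept per participant (the `||` in the afex formula additionally specifies uncorrelated random slopes):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate PLVs: 20 participants x 3 electrode clusters x 2 rhythmic levels.
# A +0.03 boost at the pulse level mimics stronger entrainment to the pulse.
rows = []
for pid in range(20):
    age = "YA" if pid < 10 else "OA"
    subj_intercept = rng.normal(0, 0.01)  # per-participant random effect
    for cluster in ["F", "O", "RP"]:
        for level in ["off_pulse", "pulse"]:
            plv = (0.05 + (0.03 if level == "pulse" else 0.0)
                   + subj_intercept + rng.normal(0, 0.005))
            rows.append({"ID": pid, "Age_Group": age,
                         "Electrode_Group": cluster,
                         "Rhythmic_Level": level, "PLV": plv})
df = pd.DataFrame(rows)

# Fixed effects with full interactions; participant as a random intercept.
model = smf.mixedlm("PLV ~ Electrode_Group * Age_Group * Rhythmic_Level",
                    df, groups="ID")
result = model.fit()
print(result.fe_params["Rhythmic_Level[T.pulse]"])  # recovers roughly 0.03
```

On these simulated data, the coefficient for the pulse level recovers the simulated +0.03 entrainment advantage at the reference cluster and age group.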

3.5. Neural Entrainment at Sub-Harmonic and Harmonic Levels Did Not Differ between OA and YA

In addition to testing our a priori hypotheses for the pulse level, we also conducted exploratory analyses of neural entrainment at the sub-harmonic (Table 4) and harmonic (Table 5) levels in a series of LME models. As with the LME model for the pulse and off-pulse levels, these models were estimated with electrode cluster (nine clusters, defined in Table 1), age group (YA, OA), and rhythmic level (sub-harmonic vs. off-sub-harmonic, or harmonic vs. off-harmonic) as fixed effects with interaction terms, and participant as a random effect with uncorrelated random slopes and intercepts. The LME for the sub-harmonic level revealed a significant two-way interaction (Table 4) between electrode group and rhythmic level (sub-harmonic, off-sub-harmonic), F(8, 3013.38) = 8.23, p < 0.001, p-Holm < 0.001, suggesting that neural entrainment to the sub-harmonic level depended on the specific electrode cluster, but not on age group.
Similar to the findings for the pulse level (Table 3), the LME for the harmonic level revealed a significant three-way interaction (Table 5) between electrode group, rhythmic level (harmonic, off-harmonic), and age group, F(8, 3014.81) = 4.74, p < 0.001, p-Holm < 0.001, suggesting that neural entrainment to the harmonic level depended on the specific electrode cluster, rhythmic level, and age group. In particular, OA displayed enhanced phase-locking to the harmonic level near electrode cluster RP: a post-hoc test revealed that older adults had stronger PLVs at the harmonic level, relative to the off-harmonic level, at electrode cluster RP, t(31) = 3.41, mean difference = 0.0314 (SE = 0.00653). A further post-hoc test comparing YA and OA PLVs at electrode cluster RP revealed that OA had slightly, though not significantly, stronger PLVs at the harmonic level, t(52) = 1.03, mean difference = 0.0142 (SE = 0.0137).
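The Holm-adjusted p-values reported throughout (p-Holm) come from the sequentially rejective Holm–Bonferroni procedure [94]. As a generic illustration with made-up p-values (not those from our models), the adjustment can be computed in Python with statsmodels:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from a family of three tests.
pvals = [0.01, 0.04, 0.03]

# Holm: sort ascending, multiply the i-th smallest p (1-indexed) by
# (m - i + 1), then enforce monotonicity with a running maximum.
reject, p_holm, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(np.round(p_holm, 3))  # [0.03 0.06 0.06]
```

The adjusted values are returned in the original order of the input p-values, so they can be reported alongside the corresponding raw test statistics.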

4. Discussion

Here, we investigated neural entrainment to musical pulse in self-selected, naturalistic music in a sample of younger and older adults. While previous research has demonstrated that the human nervous system entrains to the rhythmic structure of music [28,29,30,31,32,33], these studies largely employed artificial auditory rhythms or short excerpts of naturalistic music, limiting their ecological validity. As part of a larger project on the development of a novel music-based intervention (MBI) for aging, participants in the current study listened to self-selected music during a period of audiovisual stimulation. Participants were explicitly instructed to select their own music, as self-selected music represents a more ecologically valid listening experience that is linked to the efficacy of MBIs [25] and increases neural activity across auditory and reward systems [8,11,22,23].
We analyzed the strength of neural entrainment to musical pulse, quantified as the phase-locking value (PLV) between EEG signals and the amplitude envelope of musical recordings, after normalizing the PLV to the pulse level of each self-selected musical recording following a neurodynamical music-feature analysis. Consistent with our predictions, we observed strong neural entrainment (i.e., relatively higher PLVs) at the pulse level. We also observed neural phase-locking in both younger adults and older adults to other hierarchical levels of meter, specifically at the sub-harmonic and harmonic levels of the pulse. Unlike previous work [48], however, the strength of neural entrainment did not reflect a main effect of age at the pulse, sub-harmonic, or harmonic levels, suggesting that neural entrainment to musical pulse and meter may be preserved in aging. Despite no main effect of age, we did observe significant interactions between age and electrode cluster, revealing some age-related differences in neural entrainment at specific channels of EEG activity. In particular, younger adults displayed higher levels of neural entrainment to the pulse level at occipital electrodes, whereas older adults showed stronger neural entrainment to the harmonic level at right temporal electrodes.
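For readers unfamiliar with the metric, the PLV between two signals is the magnitude of the time-averaged unit phasor of their instantaneous phase difference, ranging from 0 (no consistent phase relationship) to 1 (perfect phase-locking). A minimal Python sketch on purely synthetic signals (using a broadband Hilbert transform for phase estimation, rather than the wavelet decomposition used in the actual analysis) illustrates the idea:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV: magnitude of the mean unit phasor of the phase difference."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 250                                  # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)
envelope = np.cos(2 * np.pi * 2.0 * t)    # ~2 Hz "musical pulse"
rng = np.random.default_rng(1)

# An "EEG" signal locked to the pulse at a fixed phase lag, plus noise:
eeg_locked = (np.cos(2 * np.pi * 2.0 * t + 0.8)
              + 0.3 * rng.standard_normal(t.size))
# A signal oscillating at an unrelated frequency:
eeg_unlocked = np.cos(2 * np.pi * 3.1 * t)

print(phase_locking_value(eeg_locked, envelope))    # high, near 1
print(phase_locking_value(eeg_unlocked, envelope))  # low, near 0
```

Note that a constant phase lag does not reduce the PLV; only variability in the phase relationship does, which is why the metric indexes the consistency of entrainment rather than simultaneity.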
The present study adds novel findings to the growing literature on the neurobiology of self-selected music listening. Despite its cross-sectional design comparing different age groups, the study offers some suggestions about the meaning of neural entrainment in response to music listening across the lifespan. First, while neural entrainment was measured between electrophysiological signals and the amplitude envelope of recorded music, it is possible that the visual stimulation delivered during the period of audiovisual stimulation also shaped neural responses at the pulse and other meter-related levels. Indeed, this may partly explain why younger adults had stronger neural entrainment to rhythm at parietal–occipital electrodes, possibly reflecting additional visual-cortical entrainment to the LED lights, whereas previous studies with auditory-only stimulation have reported stronger neural entrainment over fronto–central electrodes [33,39]. Secondly, because we permitted participants to select their own music for the audiovisual stimulation, it is difficult to conclude whether differences in neural entrainment to music across individual participants or across groups (e.g., younger and older adults) result from differences in the acoustic features of the musical stimuli or from differences in endogenous neural function. Nevertheless, our analysis of the natural pulse frequencies of the self-selected music suggests that the musical stimuli of younger and older adults contained comparable natural pulse frequencies, even prior to the pulse normalization of the PLVs. In future work, we plan to incorporate additional recording-specific musical features and acoustic differences (e.g., amount of low-frequency content) into our analyses, as these could contribute to individual and group-level differences in neural entrainment to rhythm [96,97].
Acknowledging these limitations of the experimental design, we theorize that the topographic differences in neural entrainment to musical pulse could also reflect increased sensitivity in younger adults to the visual stimulation that was delivered concurrently with the self-selected music. Indeed, recent EEG results comparing younger and older adult groups during light and sound stimulation have shown increased sensitivity in young adults at occipital sites during visual and audiovisual stimulation (Chan et al., 2021), consistent with the idea that young adults are more readily entrained than older adults by visual stimulation via lights.
Despite these limitations, this work adds to a growing body of literature that has begun to elucidate possible neurobiological mechanisms underlying the efficacy of MBIs. Recent work, for instance, has demonstrated that functional connectivity between auditory and reward systems is largely preserved during early stages of dementia [12], implicating a possible neurobiological substrate that music-based interventions can target in patients with early-stage dementia. Our findings implicate a complementary neurobiological mechanism, albeit in aging adults without dementia: music can non-invasively target and entrain rhythmic brain activity in frequency bands that are associated with aging and dementia pathology. To conclude, we believe that assessing neural entrainment to a wider range of naturalistic musical stimuli will further our understanding of how individual differences in expertise, engagement, lifespan development, and various disease states affect musical experiences. It would also illuminate how music can function as a form of non-invasive brain stimulation in designing interventions for healthy aging. Future music-based interventions may capitalize on the phase-locking between self-selected music and endogenous brain rhythms, and on the phase-amplitude coupling between different frequency bands of endogenous brain rhythms (such as the delta, theta, and gamma bands), to couple visual stimulation and musical rhythms for gamma-based multimodal brain stimulation (see Tichko et al., 2020).

Author Contributions

Conceptualization, P.T., J.C.K., E.W.L. and P.L.; methodology, P.T., E.W.L. and P.L.; software, P.T., N.P., J.C.K., E.W.L. and P.L.; validation, P.T., N.P., J.C.K., E.W.L. and P.L.; formal analysis, P.T., N.P., J.C.K., E.W.L. and P.L.; investigation, P.T., N.P., J.C.K., E.W.L. and P.L.; resources, P.T., N.P., J.C.K., E.W.L. and P.L.; data curation, P.T., N.P., J.C.K., E.W.L. and P.L.; writing—original draft preparation, P.T., N.P., J.C.K., E.W.L. and P.L.; writing—review and editing, P.T., N.P., J.C.K., E.W.L. and P.L.; visualization, P.T., N.P., J.C.K., E.W.L. and P.L.; supervision, P.T., N.P., J.C.K., E.W.L. and P.L.; project administration, P.T., N.P., J.C.K., E.W.L. and P.L.; funding acquisition, P.L. and E.W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Institutes of Health (NIH) grants R01AG078376, R21AG075232, and R43AG078012; National Science Foundation (NSF) grants NSF-CAREER 1945436 and NSF-STTR 2014870; the Grammy Foundation; and the Kim and Glen Campbell Foundation.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Northeastern University (protocol #19-03-20 approved 14 May 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they are undergoing further analysis as part of a larger study.

Acknowledgments

We acknowledge MIND Lab research assistants who helped with data collection and EEG pre-processing for this project: Aaron Kang, Felicia Guo, Grace Neale, Anjali Asthagiri, Catherine Zhou, Israel Perez, Itamar Zik, Ritu Amarnani, PhD student Alex Belden, lab manager Milena Quinci, and visiting scholar Marina Emerick.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Cheever, T.; Taylor, A.; Finkelstein, R.; Edwards, E.; Thomas, L.; Bradt, J.; Holochwost, S.J.; Johnson, J.K.; Limb, C.; Patel, A.D.; et al. NIH/Kennedy Center Workshop on Music and the Brain: Finding Harmony. Neuron 2018, 97, 1214–1218.
2. Global Council on Brain Health. Music on Our Minds: The Rich Potential of Music to Promote Brain Health and Mental Well-Being; Global Council on Brain Health: Washington, DC, USA, 2020.
3. Mammarella, N.; Fairfield, B.; Cornoldi, C. Does music enhance cognitive performance in healthy older adults? The Vivaldi effect. Aging Clin. Exp. Res. 2007, 19, 394–399.
4. Sousa, L.; Dowson, B.; McDermott, O.; Schneider, J.; Fernandes, L. Music-based interventions in the acute setting for patients with dementia: A systematic review. Eur. Geriatr. Med. 2020, 11, 929–943.
5. van der Steen, J.T.; Smaling, H.J.; van der Wouden, J.C.; Bruinsma, M.S.; Scholten, R.J.; Vink, A.C. Music-based therapeutic interventions for people with dementia. Cochrane Database Syst. Rev. 2018, 7, CD003477.
6. Vasionytė, I.; Madison, G. Musical intervention for patients with dementia: A meta-analysis. J. Clin. Nurs. 2013, 22, 1203–1216.
7. Vink, A.; Hanser, S. Music-Based Therapeutic Interventions for People with Dementia: A Mini-Review. Medicines 2018, 5, 109.
8. Loui, P. Neuroscientific Insights for Improved Outcomes in Music-based Interventions. Music Sci. 2020, 3, 205920432096506.
9. Koelsch, S. Brain correlates of music-evoked emotions. Nat. Rev. Neurosci. 2014, 15, 170–180.
10. Loui, P.; Przysinda, E. Music and the Brain: Areas and Networks. In Routledge Companion Music Cognition; Routledge: London, UK, 2017; pp. 13–24.
11. Quinci, M.A.; Belden, A.; Goutama, V.; Gong, D.; Hanser, S.; Donovan, N.J.; Geddes, M.; Loui, P. Music-Based Intervention Connects Auditory and Reward Systems. bioRxiv 2021. bioRxiv:2021.07.02.450867.
12. Wang, D.; Belden, A.; Hanser, S.B.; Geddes, M.R.; Loui, P. Resting-State Connectivity of Auditory and Reward Systems in Alzheimer’s Disease and Mild Cognitive Impairment. Front. Hum. Neurosci. 2020, 14, 280.
13. Sutcliffe, R.; Du, K.; Ruffman, T. Music Making and Neuropsychological Aging: A Review. Neurosci. Biobehav. Rev. 2020, 113, 479–491.
14. Ferreri, L.; Moussard, A.; Bigand, E.; Tillmann, B. Music and the Aging Brain. In The Oxford Handbook of Music and the Brain; Thaut, M.H., Hodges, D.A., Eds.; Oxford University Press: Oxford, UK, 2019; pp. 622–644.
15. Tichko, P.; Kim, J.C.; Large, E.; Loui, P. Integrating music-based interventions with Gamma-frequency stimulation: Implications for healthy ageing. Eur. J. Neurosci. 2020, 55, 15059.
16. Vuust, P.; Heggli, O.A.; Friston, K.J.; Kringelbach, M.L. Music in the brain. Nat. Rev. Neurosci. 2022, 23, 287–305.
17. Alluri, V.; Toiviainen, P.; Burunat, I.; Kliuchko, M.; Vuust, P.; Brattico, E. Connectivity patterns during music listening: Evidence for action-based processing in musicians. Hum. Brain Mapp. 2017, 38, 2955–2970.
18. Aydogan, G.; Flaig, N.; Ravi, S.N.; Large, E.W.; McClure, S.M.; Margulis, E.H. Overcoming Bias: Cognitive Control Reduces Susceptibility to Framing Effects in Evaluating Musical Performance. Sci. Rep. 2018, 8, 6229.
19. Janata, P. The Neural Architecture of Music-Evoked Autobiographical Memories. Cereb. Cortex. 2009, 19, 2579–2594.
20. Juslin, P.N.; Laukka, P. Expression, Perception, and Induction of Musical Emotions: A Review and a Questionnaire Study of Everyday Listening. J. New Music Res. 2004, 33, 217–238.
21. Sloboda, J.A.; O’Neill, S.A.; Ivaldi, A. Functions of Music in Everyday Life: An Exploratory Study Using the Experience Sampling Method. Music. Sci. 2001, 5, 9–32.
22. Blood, A.J.; Zatorre, R.J. Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proc. Natl. Acad. Sci. USA 2001, 98, 11818–11823.
23. Salimpoor, V.N.; van den Bosch, I.; Kovacevic, N.; McIntosh, A.R.; Dagher, A.; Zatorre, R.J. Interactions Between the Nucleus Accumbens and Auditory Cortices Predict Music Reward Value. Science 2013, 340, 216–219.
24. Pereira, C.S.; Teixeira, J.; Figueiredo, P.; Xavier, J.; Castro, S.L.; Brattico, E. Music and Emotions in the Brain: Familiarity Matters. PLoS ONE 2011, 6, e27241.
25. Leggieri, M.; Thaut, M.H.; Fornazzari, L.; Schweizer, T.A.; Barfett, J.; Munoz, D.G.; Fischer, C.E. Music Intervention Approaches for Alzheimer’s Disease: A Review of the Literature. Front. Neurosci. 2019, 13, 132.
26. Cassidy, G.; Macdonald, R. The effects of music choice on task performance: A study of the impact of self-selected and experimenter-selected music on driving game performance and experience. Music. Sci. 2009, 13, 357–386.
27. Arnal, L.H.; Doelling, K.B.; Poeppel, D. Delta–Beta Coupled Oscillations Underlie Temporal Prediction Accuracy. Cereb. Cortex. 2015, 25, 3077–3085.
28. Fujioka, T.; Trainor, L.J.; Large, E.W.; Ross, B. Internalized Timing of Isochronous Sounds Is Represented in Neuromagnetic Beta Oscillations. J. Neurosci. 2012, 32, 1791–1802.
29. Nozaradan, S.; Peretz, I.; Mouraux, A. Selective Neuronal Entrainment to the Beat and Meter Embedded in a Musical Rhythm. J. Neurosci. 2012, 32, 17572–17581.
30. Fujioka, T.; Ross, B.; Trainor, L.J. Beta-Band Oscillations Represent Auditory Beat and Its Metrical Hierarchy in Perception and Imagery. J. Neurosci. 2015, 35, 15187–15198.
31. Harding, E.E.; Sammler, D.; Henry, M.J.; Large, E.W.; Kotz, S.A. Cortical tracking of rhythm in music and speech. NeuroImage 2019, 185, 96–101.
32. Stefanics, G.; Hangya, B.; Hernadi, I.; Winkler, I.; Lakatos, P.; Ulbert, I. Phase Entrainment of Human Delta Oscillations Can Mediate the Effects of Expectation on Reaction Speed. J. Neurosci. 2010, 30, 13578–13585.
33. Will, U.; Berg, E. Brain wave synchronization and entrainment to periodic acoustic stimuli. Neurosci. Lett. 2007, 424, 55–60.
34. Woods, K.J.; Sampaio, G.; James, T.; Przysinda, E.; Spencer, A.E.; Morillon, B.; Loui, P. Stimulating music supports attention in listeners with attentional difficulties. bioRxiv 2021, 30.
35. Ding, N.; Patel, A.D.; Chen, L.; Butler, H.; Luo, C.; Poeppel, D. Temporal modulations in speech and music. Neurosci. Biobehav. Rev. 2017, 81, 181–187.
36. Large, E.W.; Herrera, J.A.; Velasco, M.J. Neural Networks for Beat Perception in Musical Rhythm. Front. Syst. Neurosci. 2015, 9, 159.
37. Doelling, K.B.; Poeppel, D. Cortical entrainment to music and its modulation by expertise. Proc. Natl. Acad. Sci. USA 2015, 112, E6233–E6242.
38. Nozaradan, S.; Peretz, I.; Missal, M.; Mouraux, A. Tagging the Neuronal Entrainment to Beat and Meter. J. Neurosci. 2011, 31, 10234–10240.
39. Tal, I.; Large, E.W.; Rabinovitch, E.; Wei, Y.; Schroeder, C.E.; Poeppel, D.; Zion Golumbic, E. Neural Entrainment to the Beat: The “Missing-Pulse” Phenomenon. J. Neurosci. 2017, 37, 6331–6341.
40. Lerud, K.D.; Almonte, F.V.; Kim, J.C.; Large, E.W. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals. Hear. Res. 2014, 308, 41–49.
41. Skoe, E.; Krizman, J.; Spitzer, E.; Kraus, N. The auditory brainstem is a barometer of rapid auditory learning. Neuroscience 2013, 243, 104–114.
42. Skoe, E.; Kraus, N. Hearing It Again and Again: On-Line Subcortical Plasticity in Humans. PLoS ONE 2010, 5, e13645.
43. Tichko, P.; Skoe, E. Frequency-dependent fine structure in the frequency-following response: The byproduct of multiple generators. Hear. Res. 2017, 348, 1–15.
44. Tognoli, E.; Kelso, J.A.S. Brain coordination dynamics: True and false faces of phase synchrony and metastability. Prog. Neurobiol. 2009, 87, 31–40.
45. Sauvé, S.A.; Bolt, E.L.W.; Nozaradan, S.; Zendel, B.R. Aging effects on neural processing of rhythm and meter. Front. Aging Neurosci. 2022, 14, 848608.
46. Alain, C.; Zendel, B.R.; Hutka, S.; Bidelman, G.M. Turning down the noise: The benefit of musical training on the aging auditory brain. Hear. Res. 2014, 308, 162–173.
47. Bones, O.; Plack, C.J. Losing the Music: Aging Affects the Perception and Subcortical Neural Representation of Musical Harmony. J. Neurosci. 2015, 35, 4071–4080.
48. Henry, M.J.; Herrmann, B.; Kunke, D.; Obleser, J. Aging affects the balance of neural entrainment and top-down neural modulation in the listening brain. Nat. Commun. 2017, 8, 15801.
49. Zendel, B.R.; Alain, C. Enhanced attention-dependent activity in the auditory cortex of older musicians. Neurobiol. Aging 2014, 35, 55–63.
50. Goodman, M.S.; Kumar, S.; Zomorrodi, R.; Ghazala, Z.; Cheam, A.S.M.; Barr, M.S.; Daskalakis, Z.J.; Blumberger, D.M.; Fischer, C.; Flint, A.; et al. Theta-Gamma Coupling and Working Memory in Alzheimer’s Dementia and Mild Cognitive Impairment. Front. Aging Neurosci. 2018, 10, 101.
51. Güntekin, B.; Başar, E. Review of evoked and event-related delta responses in the human brain. Int. J. Psychophysiol. 2016, 103, 43–52.
52. Hata, M.; Kazui, H.; Tanaka, T.; Ishii, R.; Canuet, L.; Pascual-Marqui, R.D.; Aoki, Y.; Ikeda, S.; Kanemoto, H.; Yoshiyama, K.; et al. Functional connectivity assessed by resting state EEG correlates with cognitive decline of Alzheimer’s disease–An eLORETA study. Clin. Neurophysiol. 2016, 127, 1269–1278.
53. Koenig, T.; Prichep, L.; Dierks, T.; Hubl, D.; Wahlund, L.O.; John, E.R.; Jelic, V. Decreased EEG synchronization in Alzheimer’s disease and mild cognitive impairment. Neurobiol. Aging 2005, 26, 165–171.
54. Reinhart, R.M.G.; Nguyen, J.A. Working memory revived in older adults by synchronizing rhythmic brain circuits. Nat. Neurosci. 2019, 22, 820–827.
55. Iaccarino, H.F.; Singer, A.C.; Martorell, A.J.; Rudenko, A.; Gao, F.; Gillingham, T.Z.; Mathys, H.; Seo, J.; Kritskiy, O.; Abdurrob, F.; et al. Gamma frequency entrainment attenuates amyloid load and modifies microglia. Nature 2016, 540, 230–235.
56. Martorell, A.J.; Paulson, A.L.; Suk, H.-J.; Abdurrob, F.; Drummond, G.T.; Guan, W.; Young, J.Z.; Kim, D.N.-W.; Kritskiy, O.; Barker, S.J.; et al. Multi-sensory Gamma Stimulation Ameliorates Alzheimer’s-Associated Pathology and Improves Cognition. Cell 2019, 177, 256–271.e22.
57. Ashley, R. Do[n’t] Change a Hair for Me: The Art of Jazz Rubato. Music Percept. 2002, 19, 311–332.
58. Chapin, H.; Jantzen, K.; Scott Kelso, J.A.; Steinberg, F.; Large, E. Dynamic Emotional and Neural Responses to Music Depend on Performance Expression and Listener Experience. PLoS ONE 2010, 5, e13812.
59. Istók, E.; Friberg, A.; Huotilainen, M.; Tervaniemi, M. Expressive Timing Facilitates the Neural Processing of Phrase Boundaries in Music: Evidence from Event-Related Potentials. PLoS ONE 2013, 8, e55150.
60. Kim, J.C.; Large, E.W. Signal Processing in Periodically Forced Gradient Frequency Neural Networks. Front. Comput. Neurosci. 2015, 9, 152.
61. Kim, J.C.; Large, E.W. Mode locking in periodically forced gradient frequency neural networks. Phys. Rev. E 2019, 99, 022421.
62. Kim, J.C.; Large, E.W. Multifrequency Hebbian plasticity in coupled neural oscillators. Biol. Cybern. 2021, 115, 43–57.
63. Lambert, A.J.; Weyde, T.; Armstrong, N. Adaptive Frequency neural networks for dynamic pulse and metre perception. In Proceedings of the 17th ISMIR Conference, New York, NY, USA, 7–11 August 2016.
64. Large, E.W.; Almonte, F.V.; Velasco, M.J. A canonical model for gradient frequency neural networks. Phys. Nonlinear Phenom. 2010, 239, 905–911.
65. Velasco, M.; Large, E. Pulse Detection in Syncopated Rhythms using Neural Oscillators. In Proceedings of the 12th International Society for Music Information Retrieval Conference, Miami, FL, USA, 24–28 October 2011; Volume 1, pp. 3–4.
66. Tichko, P.; Kim, J.C.; Large, E.W. Bouncing the network: A dynamical systems model of auditory–vestibular interactions underlying infants’ perception of musical rhythm. Dev. Sci. 2021, 24, e13103.
67. Tichko, P.; Large, E.W. Modeling infants’ perceptual narrowing to musical rhythms: Neural oscillation and Hebbian plasticity. Ann. N. Y. Acad. Sci. 2019, 1453, 125–139.
68. Kaplan, T.; Chew, E. Detecting Low Frequency Oscillations in Cardiovascular Signals Using Gradient Frequency Neural Networks. In Proceedings of the 2019 Computing in Cardiology Conference, Singapore, 30 December 2019.
69. Kim, J.C. A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation. Front. Psychol. 2017, 8, 666.
70. Vanden Bosch der Nederlanden, C.M.; Joanisse, M.F.; Grahn, J.A. Music as a scaffold for listening to speech: Better neural phase-locking to song than speech. NeuroImage 2020, 214, 116767.
71. Fiveash, A.; Schön, D.; Canette, L.-H.; Morillon, B.; Bedoin, N.; Tillmann, B. A stimulus-brain coupling analysis of regular and irregular rhythms in adults with dyslexia and controls. Brain Cogn. 2020, 140, 105531.
72. Mas-Herrero, E.; Marco-Pallares, J.; Lorenzo-Seva, U.; Zatorre, R.J.; Rodriguez-Fornells, A. Individual Differences in Music Reward Experiences. Music Percept. 2013, 31, 118–138.
73. Müllensiefen, D.; Gingras, B.; Musil, J.; Stewart, L. The Musicality of Non-Musicians: An Index for Assessing Musical Sophistication in the General Population. PLoS ONE 2014, 9, e89642.
74. Peretz, I.; Champod, A.S.; Hyde, K. Varieties of musical disorders: The Montreal Battery of Evaluation of Amusia. Ann. N. Y. Acad. Sci. 2003, 999, 58–75.
75. Peirce, J.; Gray, J.R.; Simpson, S.; MacAskill, M.; Höchenberger, R.; Sogo, H.; Kastman, E.; Lindeløv, J.K. PsychoPy2: Experiments in behavior made easy. Behav. Res. Methods 2019, 51, 195–203.
76. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21.
77. Mullen, T.R.; Kothe, C.A.E.; Chi, Y.M.; Ojeda, A.; Kerth, T.; Makeig, S.; Jung, T.-P.; Cauwenberghs, G. Real-time neuroimaging and cognitive monitoring using wearable dry EEG. IEEE Trans. Biomed. Eng. 2015, 62, 2553–2567.
78. Chang, C.-Y.; Hsu, S.-H.; Pion-Tonachini, L.; Jung, T.-P. Evaluation of Artifact Subspace Reconstruction for Automatic Artifact Components Removal in Multi-Channel EEG Recordings. IEEE Trans. Biomed. Eng. 2020, 67, 1114–1121.
79. Pion-Tonachini, L.; Kreutz-Delgado, K.; Makeig, S. ICLabel: An automated electroencephalographic independent component classifier, dataset, and website. NeuroImage 2019, 198, 181–197.
80. Zilany, M.S.A.; Bruce, I.C. Modeling auditory-nerve responses for high sound pressure levels in the normal and impaired auditory periphery. J. Acoust. Soc. Am. 2006, 120, 1446–1466.
81. Bello, J.P.; Duxbury, C.; Davies, M.; Sandler, M. On the Use of Phase and Energy for Musical Onset Detection in the Complex Domain. IEEE Signal Process. Lett. 2004, 11, 553–556.
82. Cohen, M.X. Analyzing Neural Time Series Data: Theory and Practice; The MIT Press: Cambridge, MA, USA, 2014.
83. Lalor, E.C.; Foxe, J.J. Neural responses to uninterrupted natural speech can be extracted with precise temporal resolution. Eur. J. Neurosci. 2010, 31, 189–193.
84. Crosse, M.J.; Di Liberto, G.M.; Bednar, A.; Lalor, E.C. The Multivariate Temporal Response Function (mTRF) Toolbox: A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli. Front. Hum. Neurosci. 2016, 10, 604.
85. Canty, A.; Ripley, B.D. Boot: Bootstrap R (S-Plus) Functions. 2021. Available online: https://cran.r-project.org/web/packages/boot/index.html (accessed on 1 May 2022).
86. Davison, A.C.; Hinkley, D.V. Bootstrap Methods and Their Applications; Cambridge University Press: Cambridge, UK, 1997.
87. Hall, J.W. Handbook of Auditory Evoked Responses; Allyn and Bacon: Boston, MA, USA, 1992.
88. Riha, C.; Güntensperger, D.; Kleinjung, T.; Meyer, M. Accounting for Heterogeneity: Mixed-Effects Models in Resting-State EEG Data in a Sample of Tinnitus Sufferers. Brain Topogr. 2020, 33, 413–424.
89. DeBruine, L.M.; Barr, D.J. Understanding mixed effects models through data simulation. Adv. Methods Pract. Psychol. Sci. 2021, 4, 22.
90. Heckerman, D.; Gurdasani, D.; Kadie, C.; Pomilla, C.; Carstensen, T.; Martin, H.; Ekoru, K.; Nsubuga, R.N.; Ssenyomo, G.; Kamali, A.; et al. Linear mixed model for heritability estimation that explicitly addresses environmental variation. Proc. Natl. Acad. Sci. USA 2016, 113, 7377–7382.
91. Jaeger, B.C.; Edwards, L.J.; Das, K.; Sen, P.K. An R2 statistic for fixed effects in the generalized linear mixed model. J. Appl. Stat. 2017, 44, 1086–1105.
92. Lenth, R.V. Emmeans: Estimated Marginal Means, Aka Least-Squares Means. Available online: https://CRAN.R-project.org/package=emmeans (accessed on 1 May 2022).
93. Cramer, A.O.J.; van Ravenzwaaij, D.; Matzke, D.; Steingroever, H.; Wetzels, R.; Grasman, R.P.P.P.; Waldorp, L.J.; Wagenmakers, E.-J. Hidden multiplicity in exploratory multiway ANOVA: Prevalence and remedies. Psychon. Bull. Rev. 2016, 23, 640–647.
94. Holm, S. A Simple Sequentially Rejective Multiple Test Procedure. Scand. J. Stat. 1979, 6, 65–70.
95. Belfi, A.M.; Moreno, G.L.; Gugliano, M.; Neill, C. Musical reward across the lifespan. Aging Ment. Health. 2021, 26, 932–939.
96. Hove, M.J.; Marie, C.; Bruce, I.C.; Trainor, L.J. Superior time perception for lower musical pitch explains why bass-ranged instruments lay down musical rhythms. Proc. Natl. Acad. Sci. USA 2014, 111, 10383–10388.
97. Weineck, K.; Wen, O.X.; Henry, M.J. Neural synchronization is strongest to the spectral flux of slow music and depends on familiarity and beat salience. eLife 2022, 11, e75515.
Figure 1. Measuring neural entrainment to musical pulse. Neural entrainment to musical pulse and rhythm was quantified as the phase-locking value (PLV) between EEG channel activity and the envelope of each musical recording. First, raw EEG channel activity was preprocessed, and the amplitude envelope of each musical recording was estimated. A time-frequency decomposition, using the complex wavelet transform, was then conducted to estimate instantaneous phase information in the EEG time-series and musical envelopes. PLVs were computed from the resulting phase time-series. Finally, PLVs underwent pulse normalization, prior to second-level analyses.
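The analysis pipeline summarized in Figure 1 can be sketched in Python. This is a minimal illustration under stated assumptions (sampling rate, wavelet width, and function names are ours), not the authors' actual code, which per the Methods used MATLAB-based toolboxes:

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_wavelet(freq, fs, n_cycles=7):
    """Complex Morlet wavelet centered at `freq` Hz (n_cycles is an assumption)."""
    sd = n_cycles / (2 * np.pi * freq)              # temporal standard deviation
    t = np.arange(-4 * sd, 4 * sd, 1 / fs)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sd ** 2))

def instantaneous_phase(x, freq, fs):
    """Instantaneous phase at `freq`, via convolution with a complex wavelet."""
    return np.angle(fftconvolve(x, morlet_wavelet(freq, fs), mode="same"))

def plv(eeg, envelope, freq, fs):
    """Phase-locking value: magnitude of the mean phase-difference vector."""
    dphi = (instantaneous_phase(eeg, freq, fs)
            - instantaneous_phase(envelope, freq, fs))
    return float(np.abs(np.mean(np.exp(1j * dphi))))
```

Two signals sharing a 2 Hz component at a fixed lag yield a PLV near 1, whereas two independent noise signals yield a PLV near 0.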
Figure 2. Natural pulse frequencies in participant-selected music. Probability density functions (PDFs) of natural pulse frequencies for younger (YA; blue) and older (OA; grey) adults’ self-selected musical recordings that survived the music-feature analysis. Younger adults (N = 16, 77 musical recordings) and older adults (N = 15, 61 musical recordings) exhibited similar distributions of natural pulse frequencies in their self-selected music, with a prominent mode at ~2 Hz, prior to pulse normalization of the PLVs. A Kolmogorov–Smirnov test indicated that the natural pulse-frequency distributions did not significantly differ between the YA and OA groups.
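The two-sample Kolmogorov–Smirnov comparison described in Figure 2 can be sketched with SciPy. The samples below are simulated stand-ins matching only the group sizes and the ~2 Hz mode; they are not the study's pulse-frequency data:

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative pulse-frequency samples (Hz): 77 YA tracks, 61 OA tracks,
# both centered near the ~2 Hz mode reported in Figure 2 (values are made up)
rng = np.random.default_rng(1)
ya_pulse = rng.normal(2.0, 0.4, size=77)
oa_pulse = rng.normal(2.0, 0.4, size=61)

stat, p = ks_2samp(ya_pulse, oa_pulse)
# `stat` is the maximum distance between the two empirical CDFs;
# a large `p` is consistent with no group difference in pulse frequency
```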
Figure 3. Neural entrainment to musical pulse and rhythm. (A) Phase-locking values (PLVs) grand-averaged across all EEG channels and each participant’s self-selected musical recordings, following pulse normalization of the PLVs. Both younger adults (blue) and older adults (black) exhibited stronger PLVs at rhythm-related levels (i.e., the sub-harmonic, pulse, and harmonic levels; red dashed lines), manifested as local maxima in the PLV plots, than at off-rhythm levels (i.e., the off-sub-harmonic, off-pulse, and off-harmonic levels; grey dashed lines), manifested as local minima. Shaded error bars are bootstrapped 95% confidence intervals. (B) Grand-averaged PLVs at the pulse and off-pulse levels for each younger adult (blue) and each older adult (black). PLVs were consistently stronger at the pulse level than at the off-pulse level (*** p ≤ 0.001). Grey bars represent the mean PLV at the pulse and off-pulse levels, collapsed across younger and older adults (i.e., a main effect of rhythm level: pulse vs. off-pulse).
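The bootstrapped 95% confidence intervals in Figure 3A were computed in the paper with the R boot package [85,86]; a percentile-bootstrap sketch of the same idea in Python follows (the PLV sample is simulated for illustration):

```python
import numpy as np

def bootstrap_ci(x, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of `x`."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # Resample with replacement and collect the resampled means
    means = np.array([rng.choice(x, size=x.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Toy per-participant PLVs (illustrative values only, N = 31 as in the study)
plvs = np.random.default_rng(2).normal(0.30, 0.05, size=31)
lo, hi = bootstrap_ci(plvs)
```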
Figure 4. Neural entrainment interacts with electrode cluster and age group. (A) Topographies of the phase-locking values (PLVs) for the pulse and off-pulse levels. Younger adults (top row) exhibited higher PLVs at the pulse level in a cluster of parietal–occipital electrodes (electrode cluster O; red dots denote the channels constituting that cluster). Older adults (bottom row) exhibited a more uniform topography of PLVs at the pulse level. (B) Mean PLVs illustrating a three-way interaction between rhythm level (pulse, off-pulse), electrode cluster (nine clusters), and age group (younger adults, older adults). Error bars are the standard error of the mean (SE).
Figure 5. Neural entrainment to sub-harmonic and harmonic levels. (A) Difference topographies (younger adults [YA] minus older adults [OA]) of phase-locking values (PLVs) for the sub-harmonic level. (B) Difference topographies (YA minus OA) of PLVs for the pulse and harmonic levels. Red dots on the pulse-level topography represent electrode cluster O; red dots on the harmonic-level topography represent electrode cluster RP. (C) Mean PLVs illustrating a two-way interaction between rhythm level (sub-harmonic, off-sub-harmonic) and electrode cluster (nine clusters). (D) Mean PLVs illustrating a three-way interaction between rhythm level (harmonic, off-harmonic), electrode cluster (nine clusters), and age group (younger adults, older adults). Error bars in (C,D) are the standard error of the mean (SE).
Table 1. Nine electrode clusters, defined from previous auditory-related research, were used to investigate channel-related differences in neural entrainment to rhythm.
| Electrode Cluster | Electrodes |
| --- | --- |
| Left frontal (LF) | AF7, AF3, F7, F5, F3, F1 |
| Right frontal (RF) | AF4, AF8, F2, F4, F6, F8 |
| Left central (LC) | FT7, FC5, FC3, T7, C5, C3 |
| Midline central (MC) | FC1, FCz, FC2, C1, Cz, C2 |
| Right central (RC) | FC4, FC6, FT8, C4, C6, T8 |
| Left parietal (LP) | TP7, CP5, CP3, P7, P5, P3 |
| Midline parietal (MP) | CP1, CPz, CP2, P1, Pz, P2 |
| Right parietal (RP) | CP4, CP6, TP8, P4, P6, P8 |
| Occipital (O) | PO3, POz, PO4, O1, Oz, O2 |
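The cluster definitions in Table 1 lend themselves to a simple lookup structure. The sketch below (our illustration, with made-up PLVs; only two of the nine clusters shown) averages channel-level PLVs within each cluster:

```python
import numpy as np

# Two of the nine electrode clusters from Table 1, as a channel lookup
CLUSTERS = {
    "MC": ["FC1", "FCz", "FC2", "C1", "Cz", "C2"],
    "O":  ["PO3", "POz", "PO4", "O1", "Oz", "O2"],
}

def cluster_means(plv_by_channel, clusters=CLUSTERS):
    """Average channel-level PLVs within each electrode cluster."""
    return {name: float(np.mean([plv_by_channel[ch] for ch in chans]))
            for name, chans in clusters.items()}

# Toy channel-level PLVs: 0.2 in the midline-central cluster, 0.4 occipital
toy_plv = {ch: 0.2 for ch in CLUSTERS["MC"]}
toy_plv.update({ch: 0.4 for ch in CLUSTERS["O"]})
by_cluster = cluster_means(toy_plv)
```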
Table 2. Younger and older adults’ mean values and standard deviations (SD) for age (years), the total score of the Barcelona Music Reward Questionnaire (BMRQ), the total score of the Goldsmiths Musical Sophistication Index (Gold-MSI), and scores on the melodic contour perception task from the Montreal Battery of Evaluation of Amusia (MBEA).
| | | All Participants | Younger Adults | Older Adults |
| --- | --- | --- | --- | --- |
| Age | Mean | 45.38 | 19.81 | 70.94 |
| | SD | 26.66 | 1.60 | 8.48 |
| BMRQ | Mean | 75.88 | 80.75 | 71.00 |
| | SD | 13.85 | 10.75 | 15.17 |
| Gold-MSI | Mean | 184.19 | 203.69 | 164.69 |
| | SD | 46.06 | 36.79 | 47.12 |
| MBEA | Mean | 23.09 | 23.94 | 22.25 |
| | SD | 3.74 | 3.73 | 3.68 |
Table 3. (a) Pulse level—ANOVA table. For the pulse level, a linear mixed-effects model returned a significant main effect of rhythmic level (pulse vs. off-pulse) on phase-locking values (PLVs) to music. (b) Pulse level—ANOVA table. For the pulse level, a linear mixed-effects model returned a significant three-way interaction between the electrode group × rhythmic level × age group fixed effects on phase-locking values (PLVs) to music.
(a) Pulse Level: Mixed Model ANOVA Table (Type-III Tests, S-method)

| Effect | DF | F | p-Value | Semi-Partial R² |
| --- | --- | --- | --- | --- |
| Rhythmic_Level | 1, 29.00 | 35.17 *** | <0.001 | 0.144 |
| Age_Group | 1, 29.00 | 0.13 | 0.717 | 0.005 |
| Rhythmic_Level:Age_Group | 1, 29.00 | 0.2111 | 0.649 | 0.002 |
(b) Pulse Level: Mixed Model ANOVA Table (Type-III Tests, S-method)

| Effect | DF | F | p-Value | Holm p-Value |
| --- | --- | --- | --- | --- |
| Electrode_Group | 8, 57.13 | 2.58 | 0.018 | 0.070 |
| Age_Group | 1, 29.00 | 0.40 | 0.532 | 1.00 |
| Rhythmic_Level | 1, 29.00 | 8.97 * | 0.006 | 0.029 |
| Electrode_Group:Age_Group | 8, 57.13 | 0.49 | 0.860 | 1.00 |
| Electrode_Group:Rhythmic_Level | 8, 3006.57 | 5.53 *** | <0.001 | <0.001 |
| Age_Group:Rhythmic_Level | 1, 29.00 | 0.30 | 0.590 | 1.00 |
| Electrode_Group:Age_Group:Rhythmic_Level | 8, 3006.57 | 7.53 *** | <0.001 | <0.001 |

* p ≤ 0.05, *** p ≤ 0.001.
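The Holm p-values reported in Tables 3–5 follow Holm's (1979) step-down procedure [94]. A compact sketch follows; the raw p-values are taken from Table 3(b), with 0.001 standing in for the entries reported only as "<0.001", so the outputs approximate rather than reproduce the table:

```python
import numpy as np

def holm_adjust(pvals):
    """Holm (1979) step-down adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    adj = p[order] * (m - np.arange(m))                # multipliers m, m-1, ..., 1
    adj = np.minimum(np.maximum.accumulate(adj), 1.0)  # enforce monotonicity; cap at 1
    out = np.empty(m)
    out[order] = adj                                   # restore original order
    return out

# Raw p-values for the seven effects in Table 3(b); 0.001 stands in for "<0.001"
raw = [0.018, 0.532, 0.006, 0.860, 0.001, 0.590, 0.001]
adjusted = holm_adjust(raw)
```

For example, the raw p = 0.006 for Rhythmic_Level becomes 0.006 × 5 = 0.030, close to the 0.029 reported from the exact (unrounded) p-values.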
Table 4. Sub-harmonic level—ANOVA table. For the sub-harmonic level, a linear mixed-effects model returned a significant two-way interaction between the electrode group × rhythmic level fixed effects on phase-locking values (PLVs) to music.
Sub-Harmonic Level: Mixed Model ANOVA Table (Type-III Tests, S-method)

| Effect | DF | F | p-Value | Holm p-Value |
| --- | --- | --- | --- | --- |
| Electrode_Group | 8, 62.24 | 3.22 * | 0.004 | 0.020 |
| Age_Group | 1, 29.00 | 0.16 | 0.693 | 1.00 |
| Rhythmic_Level | 1, 29.00 | 10.33 * | 0.003 | 0.019 |
| Electrode_Group:Age_Group | 8, 62.24 | 0.60 | 0.773 | 1.00 |
| Electrode_Group:Rhythmic_Level | 8, 3013.38 | 8.23 *** | <0.001 | <0.001 |
| Age_Group:Rhythmic_Level | 1, 29.00 | 0.01 | 0.934 | 1.00 |
| Electrode_Group:Age_Group:Rhythmic_Level | 8, 3013.38 | 1.74 | 0.085 | 0.341 |

* p ≤ 0.05, *** p ≤ 0.001.
Table 5. Harmonic level—ANOVA table. For the harmonic level, a linear mixed-effects model returned a significant three-way interaction between the electrode group × rhythmic level × age group fixed effects on phase-locking values (PLVs) to music.
Harmonic Level: Mixed Model ANOVA Table (Type-III Tests, S-method)

| Effect | DF | F | p-Value | Holm p-Value |
| --- | --- | --- | --- | --- |
| Electrode_Group | 8, 69.39 | 0.29 | 0.966 | 1.00 |
| Age_Group | 1, 29.00 | 0.01 | 0.929 | 1.00 |
| Rhythmic_Level | 1, 29.00 | 11.33 * | 0.002 | 0.010 |
| Electrode_Group:Age_Group | 8, 69.39 | 1.02 | 0.432 | 1.00 |
| Electrode_Group:Rhythmic_Level | 8, 3014.81 | 7.43 *** | <0.001 | <0.001 |
| Age_Group:Rhythmic_Level | 1, 29.00 | 0.68 | 0.415 | 1.00 |
| Electrode_Group:Age_Group:Rhythmic_Level | 8, 3014.81 | 4.74 *** | <0.001 | <0.001 |

* p ≤ 0.05, *** p ≤ 0.001.
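The linear mixed-effects models behind Tables 3–5 can be approximated in Python with statsmodels. This is a hedged sketch on simulated data: the variable names, effect sizes, and random-intercept-only structure are our assumptions, not the authors' R-based pipeline (which used Satterthwaite-style S-method tests):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format data mimicking the design: 30 participants (15 YA, 15 OA),
# two rhythmic levels, three of the nine clusters; all values are illustrative
rng = np.random.default_rng(3)
rows = []
for sub in range(30):
    group = "YA" if sub < 15 else "OA"
    sub_off = rng.normal(0, 0.03)          # participant-level random intercept
    for level in ("off_pulse", "pulse"):
        for cluster in ("LF", "MC", "O"):
            base = 0.25 + (0.05 if level == "pulse" else 0.0)  # pulse > off-pulse
            rows.append(dict(subject=sub, age_group=group, level=level,
                             cluster=cluster,
                             plv=base + sub_off + rng.normal(0, 0.02)))
df = pd.DataFrame(rows)

# Fixed effects for the full factorial design; random intercept per participant
model = smf.mixedlm("plv ~ level * age_group * cluster", df, groups=df["subject"])
fit = model.fit()
```

The fitted coefficient for `level[T.pulse]` should recover the simulated pulse-vs-off-pulse difference of roughly 0.05.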
Tichko, P.; Page, N.; Kim, J.C.; Large, E.W.; Loui, P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci. 2022, 12, 1676. https://doi.org/10.3390/brainsci12121676
