Article

Modeling Temporal Lobe Epilepsy during Music Large-Scale Form Perception Using the Impulse Pattern Formulation (IPF) Brain Model

Institute of Musicology, University of Hamburg, Neue Rabenstr. 13, 20354 Hamburg, Germany
Electronics 2024, 13(2), 362; https://doi.org/10.3390/electronics13020362
Submission received: 10 October 2023 / Revised: 8 January 2024 / Accepted: 10 January 2024 / Published: 15 January 2024
(This article belongs to the Special Issue Recent Advances in Audio, Speech and Music Processing and Analysis)

Abstract

Musical large-scale form is investigated using an electronic dance music piece fed into a Finite-Difference Time-Domain physical model of the cochlea, whose output is in turn fed into an Impulse Pattern Formulation (IPF) Brain model. In previous studies, experimental EEG data showed an enhanced correlation between brain synchronization and the musical piece’s amplitude and fractal correlation dimension, representing musical tension and expectancy time points within the large-scale form of musical pieces. This is also in good agreement with a FitzHugh–Nagumo oscillator model. However, that model cannot display temporal developments in large-scale forms. The IPF Brain model shows a high correlation between cochlea input and brain synchronization in the gamma band around 50 Hz, as well as a strong negative correlation at low frequencies, associated with musical rhythm, during time frames of low cochlea input amplitude. Such strong synchronization corresponds to temporal lobe epilepsy, which is often associated with creativity or spirituality. The IPF Brain model results therefore suggest that such conscious states occur at times of low external input at low frequencies, where isochronous musical rhythms are present.

1. Introduction

Musical large-scale form, the temporal organization of a musical piece over its whole length (e.g., a song’s verse and refrain structure, the tension build-up and relaxation of electronic dance music, or sonata structures), is poorly researched in terms of brain dynamics [1,2]. It is widely accepted that music is organized according to Gestalt principles [3,4], often with hierarchical structures, psychologically [5,6] or in terms of music theory [7], and is subject to long-term memory [8].
The processing of sound in the brain has historically been understood as a bottom-up process, starting from the transduction of sound into neural spikes in the cochlea [9], further processed in the auditory pathway [10], up to coincidence detection [9], tonotopy, interaural level and time difference detection, and pitch perception [11,12], among others. Still, even within the auditory cortex, multiple bottom-up and top-down connections are present [13], and so viewing the auditory pathway as a complex, self-organizing neural network seems more appropriate.
Such a self-organizing view is standard in terms of cortical processing, and existing neural models vary in terms of complexity and scaling [14]. Only a few brain models try to understand the brain with simple principles like the free-energy principle [15], assuming adaptation of the brain to external surprises; the global workspace view [16], assuming synchronization and de-synchronization of brain parts over time; or the synergetic approach of Gestalt perception [17].
Machine learning models also treat conscious content as a process of complex, often nonlinear interactions between single neurons, likewise leading to heuristic and coherent Gestalt-like connectionist models [18,19] and music models [20,21], for instance when analyzing large musical databases such as Computational Phonogram Archiving [22] for streaming platforms or archives in general [23].
The self-organizing concept is also reflected in the idea of fifty-millisecond intervals of organized neural spatiotemporal patterns followed by short, chaotic disturbances associated with olfactory [24] or auditory [25,26] conscious content. Enlarging the picture to interactions between subjects results in the idea of a self-organizing society of brains [27] or the inclusion of cultural artifacts and nature in Physical Culture Theory [28].
Incorporating brains with cultural objects requires a general framework which was proposed as the Impulse Pattern Formulation (IPF), first developed for musical instruments [29]. As the only force we physically experience—next to gravity—is electricity, it is straightforward to formulate acoustic as well as electric nervous spike impulses as being of the same nature. The IPF then takes a viewpoint of a neuron or musical instrument part, from which an impulse is sent out to several other neurons, musical instrument parts, or any kind of object. This impulse is processed, damped, and returned back to the viewpoint object [12,29,30]. Such an iterative, nonlinear dynamical process is scale-free, capable of modeling sudden phase changes, and includes convergent, bifurcating, complex, and chaotic states. Although often modeling a system with only very few nodal points, the IPF has already been shown to be of high precision in musical instrument applications [30] as well as in rhythm perception and production [31].
The model was also formulated as a brain model [28] with neural adaptation and plasticity, non-trivially finding that a concentration of 10–20% inhibitory vs. excitatory neurons, corresponding to the real ratio in the brain [32], maximizes possible system convergence. The model also finds a maximum reflection strength around the usual latency of event-related potentials, as well as a decay of memory in the system corresponding to short-term memory. These general findings strongly point to the validity of the model.
Investigating the brain dynamics using large-scale musical forms has already shown that musical tension can be represented in the increasing synchronization of brain parts [1,2,33]. Such findings correspond to synchronization caused by expectancy [34] of a climax. This represents the development of tension over the large-range form of a musical piece, but it also might correspond to other semantic content like anger, anxiety, drama, relaxation, spirituality, or meditation. Such a relation might be found when referring to the lyrics of a musical song, discussed in the conclusion section below with respect to the piece investigated. Nevertheless, in EEG measurements, the main finding is that brain synchronization is strongest around 50 Hz, and thus in the gamma band of brain dynamics, although of course other brain rhythms exist [32].
An interesting aspect of the IPF Brain model is the presence of convergence, i.e., the complete synchronization of the neurons in the model. This corresponds to epilepsy [35], which is not a usual brain state and is a very dangerous one. Still, partial synchronization in the brain is well known in the form of chimera states [36]. Also, in terms of the auditory cortex, temporal lobe epilepsy has often been reported [37] and is associated with spiritual or meditative states of mind. Such states are naturally associated with longer time spans than pitch or rhythm and are therefore subject to large-scale forms.
The present paper applies the IPF Brain model as suggested previously [28] to the case of the electronic dance music (EDM) piece One Mic by the rapper NAS, which has been investigated before in EEG experiments [1,2] and using a FitzHugh–Nagumo dynamical brain model [33]. Strong correlations between brain synchronization and the piece’s large-scale form, represented by the sound amplitude and the fractal correlation dimension as a measure of musical density [29], have been found both experimentally and in the model, with maximum synchronization in the gamma band of brain dynamics around 50 Hz in both cases. Still, because the FitzHugh–Nagumo model does not resolve the temporal development of the brain dynamics, a deeper understanding of the reason for this frequency-dependent synchronization requires a model capable of such temporal resolution, which the IPF Brain model presented in this paper provides.
The paper first introduces the method used. As in the previous paper, the input to the Brain IPF is the output of a Finite-Difference Time-Domain (FDTM) physical model of the cochlea, which itself takes the musical sound as input. To estimate the influence of the number of neurons used in the model, N = 50, 100, and 200 neurons are taken, showing the independence of the model with respect to this parameter. In Section 2, the post-processing of these modeled parameters is discussed, especially the Kuramoto order parameter used to measure the synchronization of the neurons and the correlation of this parameter with the cochlea input. Section 3 then mainly concentrates on this correlation and discusses the reason for the model behavior. In Section 4, an overview of different conscious states with respect to the synchronization frequency is given.

2. Methods

2.1. Cochlea Model

The present model assumes the differential equation of a membrane
$$\frac{K(x)}{\mu(x)}\,\frac{\partial^2 u}{\partial x^2} - d(x)\,\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial t^2} + f(t), \qquad (1)$$

with basilar membrane displacement u along a one-dimensional axis x; basilar membrane stiffness K(x) = 2 × 10^9 e^{−3.4 x} dyn/cm^3, changing along the x axis; linear density μ(x) = m/A(x), with the mass over area again changing along the basilar membrane; and A(x) = 0.1 cm × (0.1 cm + 0.02 cm × x/l), with basilar membrane length l = 3.5 cm, taking the slight widening of the basilar membrane over its length into account.
A 1D model is sufficient to model the basilar membrane, as shown in [38], which compared 1D and 2D models based on the anisotropy of the basilar membrane as discussed by [39]. There, it was found numerically and based on experimental data that the inclusion of a second dimension contributes less than 1% to the results already obtained by a 1D model. This is reasonable, as the basilar membrane has dimensions of about 3.5 cm in length but only about 1 mm in width and is therefore more a rod than a membrane. Also, the Young’s modulus in the y-direction is only about 10% of that in the x-direction [38] and therefore does not add considerably to the overall basilar membrane movement.
To confirm this finding, a two-dimensional model was built:
$$\frac{K_x(x)}{\mu(x,y)}\,\frac{\partial^2 u}{\partial x^2} + \frac{K_y(x)}{\mu(x,y)}\,\frac{\partial^2 u}{\partial y^2} - d(x,y)\,\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial t^2} + f(t), \qquad (2)$$
with a Young’s modulus in the x-direction of K_x(x) = 2 × 10^9 e^{−3.4 x} dyn/cm^3, as above, and in the y-direction of K_y(x) = 0.1 × K_x(x), according to the literature [38]. The linear density of the 1D model is retained, μ(x, y) = μ(x), as is the damping, d(x, y) = d(x). The model again consists of 48 nodal points in the x-direction and 6 nodal points in the y-direction. The boundary conditions were again simply supported. Still, the results did not differ considerably [9]. For the sake of simplicity, and also taking the computational cost into consideration, the 1D model was used for all further calculations.
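To make the discretization concrete, the following is a minimal sketch of an explicit finite-difference time-domain update for the 1D membrane equation (1), written in Python for illustration only; the original implementation was in C++/C#/CUDA (see the implementation note in Section 2.5). The mass parameter m, the damping profile d(x), and the test drive signal are assumptions, as they are not fully specified in the text.

import numpy as np

# Sketch of an explicit FDTD step for Eq. (1): u_tt = (K/mu) u_xx - d u_t - f(t).
# Assumed values: m, d(x), and the 1 kHz test drive.

L_bm = 3.5                        # basilar-membrane length in cm
N_x = 48                          # nodal points along x, as in the paper
fs = 192_000                      # FDTD sample rate in Hz
dt = 1.0 / fs
dx = L_bm / (N_x - 1)

x = np.linspace(0.0, L_bm, N_x)
K = 2e9 * np.exp(-3.4 * x)        # stiffness K(x) in dyn/cm^3
A = 0.1 * (0.1 + 0.02 * x / L_bm) # widening cross-section A(x) in cm^2
m = 0.2                           # assumed mass parameter
mu = m / A                        # linear density mu(x) = m / A(x)
d = 200.0 * np.ones(N_x)          # assumed damping profile d(x)

def fdtd_step(u_prev, u_now, drive):
    """One explicit time step of the 1D membrane equation."""
    u_xx = np.zeros(N_x)
    u_xx[1:-1] = (u_now[2:] - 2.0 * u_now[1:-1] + u_now[:-2]) / dx**2
    u_t = (u_now - u_prev) / dt
    u_next = 2.0 * u_now - u_prev + dt**2 * ((K / mu) * u_xx - d * u_t - drive)
    u_next[0] = u_next[-1] = 0.0  # simply supported ends
    return u_next

# Drive the basal end with a short 1 kHz test tone and record the response.
u_prev, u_now = np.zeros(N_x), np.zeros(N_x)
response = []
for n in range(fs // 10):         # 100 ms of simulation
    drive = np.zeros(N_x)
    drive[1] = np.sin(2.0 * np.pi * 1000.0 * n * dt)
    u_prev, u_now = u_now, fdtd_step(u_prev, u_now, drive)
    response.append(u_now.copy())

In the full cochlea model, the spike time points S_Bi and amplitudes A_Bi per Bark band described below are then derived from the membrane response; the sketch stops at the membrane displacement.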
The electronic dance music piece One Mic by the rapper NAS was used as the sound input to the cochlea model. The FDTM sample rate was 192 kHz to ensure model stability. The output is a set of spike time points S_Bi at each of the 24 Bark bands B, with i = 1, 2, 3, …, N_B, where N_B is the maximum number of spikes at Bark band B. Each S_Bi has an associated amplitude A_Bi. Although single spikes have a more or less uniform amplitude, the cochlea nerve fiber output of the model sums many spikes at each position S_Bi. Therefore, A_Bi represents the number of spikes, or the output strength.
As input to the IPF Brain model, all outputs were used, accumulated over 20 ms time intervals, corresponding to f_max = 500 Hz, which is then the time constant of the IPF Brain model discussed below. Therefore, the frequency range of interest in the brain is well represented up to about 200 Hz. The resulting cochlea output time series is then denoted as

$$C_t = \sum_{B=1}^{24} \sum_{i=1}^{f_{max} \times T_{max}} G(A_{Bi})\, H(S_{Bi}), \qquad (3)$$

where T_max = 266 s is the length of the musical piece, and the functions G and H detect whether the respective spike lies within the respective time window, giving G(A_Bi) = A_Bi and H(S_Bi) = 1 if so, and zero otherwise. For further correlation with the IPF Brain model output, C_T is calculated as the time series averaged over time windows of 1 s. The unit-less mean amplitude of C_t is 0.000118.
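As an illustration of Equation (3), a possible accumulation of the spike output into 20 ms bins and 1 s averages could look as follows (Python sketch; the spike lists and their format are placeholders, not the actual cochlea-model output format):

import numpy as np

# Hedged sketch of Eq. (3): sum spike amplitudes into 20 ms windows per Bark
# band, then average over 1 s windows for the correlation with K_T(f).

T_max = 266.0          # length of the piece in seconds
f_max = 500            # bin rate in Hz -> 20 ms windows
n_bins = int(f_max * T_max)

def cochlea_time_series(spike_times, spike_amps):
    """spike_times[B], spike_amps[B]: spike times (s) and amplitudes for
    Bark band B = 0..23. Returns C_t sampled at 500 Hz."""
    C_t = np.zeros(n_bins)
    for times, amps in zip(spike_times, spike_amps):
        bins = np.minimum((np.asarray(times) * f_max).astype(int), n_bins - 1)
        np.add.at(C_t, bins, amps)      # G(A_Bi) * H(S_Bi) summed per window
    return C_t

def average_per_second(C_t):
    """C_T: C_t averaged over 1 s windows."""
    return C_t.reshape(-1, f_max).mean(axis=1)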

2.2. Brain Impulse Pattern Formulation (IPF)

The brain is modeled using N = 50 neurons. Each neuron is a reflection point, returning impulses sent out by a viewpoint neuron. The system state of the viewpoint neuron is g, which represents a time period and an amplitude strength. Each reflection neuron i has a damping α_i. The IPF is then

$$g_t = \frac{1}{N} \sum_{i=1}^{N} P_i \ln\!\left(\alpha_t^i\, g_{t-i}\right) + w_2\, g_t^I. \qquad (4)$$

Here, the viewpoint neuron is i = 1; the reflections come from neurons i = 2, 3, 4, …, N. P_i are the polarizations of the neurons, where P_i = 1 denotes an excitatory neuron and P_i = −1 an inhibitory neuron. Throughout the paper, a ratio of 10% inhibitory neurons is used. The external input g_t^I, here the cochlea output, enters with weight w_2. Note that the sum of all reflections is normalized by the number of neurons N. The model is discrete, with time steps t = 0, 1, 2, 3, … Therefore, the earlier states of the viewpoint neuron, which this neuron has sent out to the other neurons, return after a delay in a damped and polarized form.
For a deeper discussion of the model, see [28].

2.3. Plasticity Model

The plasticity of each neuron is calculated at each time step t. Plasticity refers to a change in the damping parameter α_i, where each time step t may therefore have a different damping α_t^i. Note that in the IPF reasoning, the damping is originally 1/α, which, for the sake of convenience, is skipped here, using α instead.
For each time step, the new damping is calculated as
$$\alpha_t^i = \alpha_{t-1}^i + w_1 \ln\!\left(1 + \left(g_{t-i} - g_t\right)\right). \qquad (5)$$

If the reflected state g_{t−i} has the same value as the viewpoint neuron value g_t, the logarithm becomes zero and no change in damping α_t^i happens. If g_{t−i} is larger than the viewpoint value, the logarithm becomes larger than zero and α^i increases. Otherwise, the logarithm assures a negative influence, and α^i decreases. The strength of the plasticity process is modeled using a constant w_1. Therefore, plasticity can be switched off in the model by setting w_1 = 0. To examine different model behaviors, w_1 is systematically altered, as shown below. Again, the absolute value of α is used, not allowing negative or complex values. This, again, does not change the model behavior due to the logarithms used; still, positive values are more convenient. Indeed, negative arguments of the logarithm appear very rarely in the simulations shown below and are additionally suppressed by using the absolute value.
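A minimal sketch of the iteration defined by Equations (4) and (5) is given below, in Python for illustration; the original analysis was run in Mathematica (Section 2.5). The weights w_1 and w_2, the initial values of g and α, and the small offsets guarding the logarithm are assumptions chosen only to make the sketch run.

import numpy as np

# Hedged sketch of the IPF Brain iteration, Eqs. (4)-(6): a viewpoint neuron
# receives damped, polarized reflections of its own earlier states from N
# reflection points, with plasticity (Eq. (5)) acting on the dampings alpha.

def run_ipf(ext_input, N=100, w1=0.01, w2=0.1, p_inhibitory=0.1, seed=0):
    """ext_input: external drive g^I_t (e.g., the binned cochlea series C_t).
    Returns the viewpoint time series g_t and the reflections g^i_t."""
    rng = np.random.default_rng(seed)
    T = len(ext_input)
    eps = 1e-12                                 # assumed guard against log(0)

    P = np.ones(N)
    P[rng.random(N) < p_inhibitory] = -1.0      # about 10% inhibitory neurons
    alpha = np.ones(N)                          # dampings alpha^i, adapted over time
    g = np.full(T, 1e-3)                        # viewpoint neuron states g_t
    refl = np.zeros((T, N))                     # reflections g^i_t, Eq. (6)

    for t in range(1, T):
        for i in range(1, N):                   # reflections from neurons i = 2..N
            delayed = g[max(t - i, 0)]          # earlier viewpoint state g_{t-i}
            refl[t, i] = P[i] * np.log(np.abs(alpha[i] * delayed) + eps)
        g[t] = refl[t].mean() + w2 * ext_input[t]            # Eq. (4)
        for i in range(1, N):                   # plasticity, Eq. (5)
            delayed = g[max(t - i, 0)]
            alpha[i] += w1 * np.log(np.abs(1.0 + (delayed - g[t])) + eps)
    return g, refl

In this sketch, the binned cochlea output from Section 2.1 would serve as ext_input.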

2.4. External Musical Instrument Input IPF

As in a previous study [33], the electronic dance music piece One Mic by the rapper NAS was used.

2.5. Detection of System Behavior

Each neuron has its own time series
$$g_t^i = P_i \ln\!\left(\alpha_t^i\, g_{t-i}\right) \qquad (6)$$

when reflecting back to the central neuron, as can be seen in Equation (4), where g_t^i is the reflection of the ith neuron at time point t with a certain α_t^i at that time point. These time series are Fourier-analyzed with time windows of one second, resulting in F_T^i(f), where 0 ≤ T ≤ 266 s, i.e., the length of the musical piece. All IPF simulations in this paper were performed with a maximum of f_max = 500 Hz; therefore, 1 ≤ f ≤ 500 Hz. The time series of the central neuron is labeled below simply as g.
Synchronization was measured, as in a previous paper, using the Kuramoto order parameter
$$K_T(f) = \left|\frac{1}{N} \sum_{i=1}^{N} e^{\imath\, \theta_T^i(f)}\right|, \qquad (7)$$

where θ_T^i(f) is the phase of the ith neuron at time interval T for frequency f, taken from F_T^i(f). N is the number of neural reflection points; in this study, N = 50, N = 100, and N = 200. The Kuramoto order parameter is used as it is the most widely accepted measure of synchronization. It holds that 0 ≤ K_T(f) ≤ 1, with K_T(f) = 0 in the case of no synchronization and K_T(f) = 1 in the case of maximum synchronization.
The synchronization order parameter K_T(f) is time-dependent. To estimate the overall synchronization strength from K_T(f), a time-averaged mean K(f) is calculated.
Also, the correlation of K_T(f) with the cochlea input time series C_T is computed, leading to K_C(f), an estimate of the frequency dependence of the correlation strength between synchronization and the input time series.
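The post-processing described above can be sketched as follows, in Python for illustration only (the actual analysis was performed in Mathematica, as noted below): Fourier analysis of the reflection time series in 1 s windows, the Kuramoto order parameter K_T(f) over the N phases, its time average K(f), and the correlation with the 1 s averaged cochlea input C_T. Window length and array shapes are assumptions consistent with f_max = 500 Hz.

import numpy as np

# Sketch of the synchronization analysis: K_T(f) per 1 s window (Eq. (7)),
# time-averaged K(f), and Pearson correlation with the cochlea input, K_C(f).

f_max = 500                                    # IPF time steps per second

def kuramoto_order(refl):
    """refl: shape (time_steps, N) reflection series g^i_t.
    Returns K_T(f) with shape (seconds, f_max // 2 + 1)."""
    n_sec = refl.shape[0] // f_max
    windows = refl[:n_sec * f_max].reshape(n_sec, f_max, -1)
    phases = np.angle(np.fft.rfft(windows, axis=1))        # theta^i_T(f)
    return np.abs(np.mean(np.exp(1j * phases), axis=2))    # |1/N sum e^{i theta}|

def mean_sync(K):
    """Time-averaged synchronization strength K(f)."""
    return K.mean(axis=0)

def sync_input_correlation(K, C_T):
    """Pearson correlation of K_T(f) with the 1 s averaged input, giving K_C(f)."""
    n = min(len(C_T), K.shape[0])
    return np.array([np.corrcoef(K[:n, f], C_T[:n])[0, 1] for f in range(K.shape[1])])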
The Finite-Difference Time-Domain model of the cochlea was implemented in C++, C#, and CUDA in Visual Studio 2012 and 2017. The CUDA code runs the model on an NVIDIA graphics processing unit, calculating the model’s nodal points in parallel. The IPF model was run, and the analysis of the results performed, using Mathematica 12.

3. Results

To estimate the influence of the number of neurons used, the analysis was performed for N = 50, as in a previous study, as well as for N = 100 and N = 200.

3.1. System Behavior

First, the system parameter g is expected to follow the cochlea input systematically. Figure 1 shows the cochlea input C_T at the bottom (blue) and the system parameter g of the central neuron at the top (yellow). The simulation was performed for N = 50, N = 100, and N = 200 neural reflection points; the figure shows the N = 100 case. Visually, the time series correspond well. The correlations are 0.37 for N = 50, 0.38 for N = 100, and 0.40 for N = 200. This is a first example of the high independence of the results from the number N of neural reflection points.

3.2. Gamma Band Synchronization Strength

The IPF Brain model is expected to reproduce the experimental finding that the strongest correlation between brain synchronization and musical sound input occurs in the gamma band around 50–80 Hz. Below and above the gamma band, synchronization is still present but decreases considerably.
Figure 2 shows K_C(f), the correlation of the Kuramoto order parameter with the cochlea input C_T, for frequencies up to 200 Hz and for the three numbers of reflection points, N = 50, N = 100, and N = 200. As expected, there is a peak in K_C(f) around 50 Hz, with the correlation decreasing both above 50 Hz, up to about 90–100 Hz, and below 50 Hz. This is consistent over N = 50, 100, and 200, again pointing to independence from the choice of N. Still, the correlation for N = 50 is slightly lower than for N = 100 and N = 200, a tendency already found above.
As an example of the correspondence underlying K_C(f), Figure 3 shows the time series of C_T and of the Kuramoto order parameter at 50 Hz, K_T(50 Hz), for the N = 100 case. Clearly, synchronization follows the amplitude of the cochlea input, and thus the musical piece, closely. The synchronization is not as smooth as found experimentally; however, the model only represents listening to this musical piece, while the experimental EEG data measure the whole brain performing many other tasks at the same time.
Correlations decrease up to 90–100 Hz and increase again at higher frequencies. Oscillations higher than about 100 Hz are not considered within ‘classical’ bands like delta (0.5–3.5 Hz), theta (4–7 Hz), alpha (8–12 Hz), beta (13–30 Hz), and the gamma band, which is found at frequencies above 30 Hz [40], or split into the lower gamma range (30–60 Hz) or higher gamma range (60–120 Hz) [41]. Frequencies above 120 Hz are referred to as fast oscillations [32] and are only briefly associated with conscious content.
Looking for an explanation of the gamma band strength, one could expect some connection between the frequency dependence of K_C(f) and the cochlea input C_t or the system parameter g. However, connections in terms of their spectral amplitudes are not straightforward. Figure 4 shows both spectra. They are very similar, pointing to a response of the Brain IPF to its input, as expected. Indeed, at around 50 Hz a peak is present in both spectra, corresponding to the 50 Hz peak of K_C(f); still, the 50 Hz peaks in C_t and g are much sharper. Also, the C_t and g spectra have a wider peak around 90–100 Hz, while there is a gap in the correlation with Kuramoto synchronization in this region.
Furthermore, the spectra of both C_t and g show enhanced energy in the low-frequency range, pointing to the rhythm content of the musical piece. The piece has a tempo of 92 BPM, corresponding to 1.53 Hz. During the high-amplitude sections of the piece, semiquaver notes are played, resulting in 6.1 Hz; during the low-amplitude sections, the piece uses mainly quaver notes at 3.1 Hz. Thus, much energy is expected below about 6 Hz, which is indeed the case.
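For reference, these rhythm frequencies follow directly from the tempo; the symbols below are introduced only for this calculation:

$$f_{beat} = \frac{92}{60\,\mathrm{s}} \approx 1.53\,\mathrm{Hz}, \quad f_{quaver} = 2\, f_{beat} \approx 3.1\,\mathrm{Hz}, \quad f_{semiquaver} = 4\, f_{beat} \approx 6.1\,\mathrm{Hz}.$$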
Again, connections to K_C(f) are not straightforward, as below about 10 Hz, K_C(f) is mainly negative. This is not found experimentally and is surprising at first. Still, when examining the example of N = 100 at 2 Hz shown in Figure 5, the reason becomes clear. During times of low cochlea input amplitude, synchronization is very strong, while it is lower when the cochlea input is stronger. In a previous study on the principal behavior of the IPF Brain model [28], it appeared that at a ratio of 10% inhibitory vs. excitatory neurons, an optimum in terms of convergence or stability of the system is reached. Therefore, the system is expected to converge when a low external input is presented, which is the case here. This leads to a systematic increase in synchronization at times of low external input and therefore to a negative correlation.
For comparison, an IPF Brain model fed with white noise was studied to see if the low-frequency behavior might be a systematic bias of the model. However, over the whole frequency range, low correlations around zero were found, with slightly positive and negative values and no frequency dependence, as expected.
Such convergence would correspond to epilepsy, as all neurons are synchronized. Indeed, epilepsy is reported in the auditory cortex, associated with creativity or spirituality [37]. Such non-everyday experiences are known to occur as a trance, often associated with either long periods of isochronous rhythm or during silence. Both are present in the musical piece during times of low cochlea input. Of course, as discussed above, the model is only about listening to a musical piece without performing other tasks like a real brain does. Still, temporal lobe epilepsy is only present at the auditory cortex, and therefore, the negative correlations might model such a case.
Indeed, in the EEG data, there is enhanced synchronization at such low frequencies, which again is not found in the FitzHugh–Nagumo model [33]. This synchronization is positive, not negative. This might be due to the EEG data being taken from all regions of the human skull, while the IPF Brain model does not specify brain regions and is therefore also suitable for representing the auditory cortex alone. The enhanced synchronization at lower frequencies might also be caused by motor activity in the brain at low frequencies, in the range of dancing body movements.

3.3. Scaling Law

EEG data of the brain show a scaling law in the brain spectrum. Plotting this spectrum in a log–log plot exhibits a constant slope over a wide range of frequencies [24]. This corresponds to so-called 1/f noise, as the spectrum follows a 1/f^α rule, where a fractal dimension of α ≈ 2 is found. The scaling law points to a close and systematic connection between the different frequency regions of brain activity and to the self-organizing nature of the brain.
The IPF Brain model also shows such a scaling law, as seen in Figure 6. Experimental EEG data also show a plateau, or even a positive slope, up to about 5 Hz, consistent with the IPF Brain findings. The rippling of the spectrum indicates enhanced activity in certain regions, like the gamma band found above. The only aspect not corresponding to EEG data is the fractal dimension of the IPF at α ≈ 0.43, rather than the expected α ≈ 2. The reason for this is unclear and will be analyzed in future work.
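The exponent α can be estimated as the negated slope of a straight-line fit in the log–log plot. The following is a minimal sketch in Python, assuming the spectrum and its frequency axis are available as arrays; the fit range is an illustrative assumption.

import numpy as np

# Sketch: estimate the scaling exponent alpha of a 1/f^alpha spectrum as the
# negated slope of a linear fit in log-log coordinates.

def fit_spectral_exponent(freqs, spectrum, f_lo=5.0, f_hi=200.0):
    """Return alpha such that spectrum ~ 1/f^alpha over [f_lo, f_hi]."""
    freqs = np.asarray(freqs, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    mask = (freqs >= f_lo) & (freqs <= f_hi) & (spectrum > 0)
    slope, _intercept = np.polyfit(np.log10(freqs[mask]), np.log10(spectrum[mask]), 1)
    return -slope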

4. Conclusions and Discussion

The association of temporal lobe epilepsy with spiritual or meditative experiences corresponds well with the intention of the musical piece One Mic used in this investigation. The lyrics report on the hard struggle in criminal gangs, with police interaction and shootings. Several references are made to spiritual, especially Christian, symbols and similar ways of suffering. These lyrics are rapped during the large-amplitude time frames of the piece, i.e., the verses, where a police siren can also be heard. These sections are followed by the low-amplitude parts, where the refrain All I need is one mic is repeated, pointing to music and lyric production as an alternative to, or a weapon against, such a hard struggle. These sections contrast with the verses, as they are presented in a contemplative or meditative way.
This compositional tool of presenting a meditative alternative by reducing the volume, reducing the beat from semi-quavers to quavers, and omitting most of the sound effects and keyboard pads of the verse is shown in this paper to produce strongly enhanced synchrony and convergence of the neural network. This synchrony is found in the low frequency range, where the musical rhythm is represented.
Such musical structures are very simple and are present in many musical pieces, where regions of low volume and isochronous rhythm are present. Further investigations are needed to model and measure the neural reaction to such musical content in more detail. Still, due to the simplicity of this compositional tool, one can expect composers and musicians to use it in many musical scenarios.
In a previous, similar EEG study of a musical piece, Classical Symphony by Shemian, increased brain synchronization was found towards an expectancy point, after which synchronization decreased again [1]. That piece is a typical electronic dance music piece in that tension is built up by compositional tools like increased amplitude and event density, only to climax at a point where the dense structure ends and a four-to-the-floor bass drum starts. Again, this is a typical compositional tool in electronic dance music: a tension build-up and decay repeated dozens of times during a song. The effect of this compositional tool on the large-scale musical form is clearly present in the EEG data.
In that study, brain synchronization followed the reasoning of a coincidence detection mechanism of cortical oscillators, modeling neural activity in the striatum [34]. After the start of a neural oscillation, increased synchronization peaks occur at the point at which a subject expects an event to happen, like waiting for a traffic light to turn green. The oscillation is then expected to include motor regions, making us nervously shake towards the expected time point.
This might be considered another compositional tool to make people dance. A tension build-up, leading to a neural oscillation that includes the motor region, enhances the will of subjects to move and, in the case of music, to dance. Although there is strong evidence that this is the case, there is no final proof, as measurements in the motor region are still missing for the musical case.
This line of reasoning seems fundamentally different from that followed in the present paper, of neural synchronization leading to a meditative mood. Still, both cases are confirmed experimentally, as is the case of temporal lobe epilepsy [37]. The difference might indeed lie in the different frequency ranges: increased synchronization at higher frequencies, around 50 Hz, might cause the perception of increased tension, whereas synchronization at low frequencies might, on the contrary, lead to a meditative state. Taking into account that synchronization and de-synchronization are fundamental activities in the brain, and that brain activity represents all possible states of mind, perception, and consciousness, such a differentiation seems plausible.

Funding

This research received no external funding.

Institutional Review Board Statement

The study does not include ethical issues.

Data Availability Statement

The data used are those of the musical piece One Mic by NAS, which can be found on music streaming platforms.

Acknowledgments

I thank Lenz Hartmann for contributing the EEG measurements that were published previously and again used in this study.

Conflicts of Interest

There are no conflicts of interest.

References

  1. Hartmann, L.; Bader, R. Neural Synchronization of Music Large-Scale Form. arXiv 2020, arXiv:2005.06938. [Google Scholar]
  2. Hartmann, L. Neuronal synchronization of musical large-scale form: An EEG-study. Proc. Mtgs. Acoust. 2014, 22, 035001. [Google Scholar] [CrossRef]
  3. Bregman, A.S. Auditory Scene Analysis: The Perceptual Organization of Sound; MIT Press: Cambridge, MA, USA, 1990. [Google Scholar]
  4. Leman, M.; Carreras, F. Schema and Gestalt: Testing the hypothesis of Psychoneural Isomorphism by Computer Simulation. In Music, Gestalt, and Computing. Studies in Cognitive and Systematic Musicology; Leman, M., Ed.; Springer: Berlin/Heidelberg, Germany, 1997; pp. 144–168. [Google Scholar]
  5. Deutsch, D. The Psychology of Music, 3rd ed.; Academic Press series in cognition and perception; Academic: Oxford, UK, 2013. [Google Scholar]
  6. Deliège, I.; Mélen, M. Cue abstraction in the representation musical form. In Perception and Cognition of Music; Deliège, I., Sloboda, J.A., Eds.; Psychology Press: Hove, UK, 2014; pp. 387–412. [Google Scholar]
  7. Lerdahl, F.; Jackendoff, R. A Generative Theory of Tonal Music, 4th ed.; The MIT Press Series on Cognitive Theory and Mental Representation; MIT Press: Cambridge, MA, USA, 1990. [Google Scholar]
  8. Cuddy, L.L. Long-Term Memory for Music. In Springer Handbook of Systematic Musicology; Bader, R., Ed.; Springer: Berlin/Heidelberg, Germany, 2018; pp. 453–459. [Google Scholar]
  9. Bader, R. Cochlea spike synchronization and coincidence detection model. Chaos 2018, 28, 023105. [Google Scholar]
  10. Pressnitzer, D.; de Cheveigne, A.; McAdams, S.; Collet, L. (Eds.) Auditory Signal Processing: Physiology, Psychoacoustics, and Models; Springer: New York, NY, USA, 2004. [Google Scholar]
  11. Cariani, P. Temporal Codes, Timing Nets, and Music Perception. J. New Music. Res. 2001, 30, 107–135. [Google Scholar] [CrossRef]
  12. Bader, R. How Music Works. A Physical Culture Theory; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  13. Hawkins, L.H.; McMullen, T.H.; Popper, A.N.; Fay, R. (Eds.) Auditory Computation; Springer Handook of Auditory Research; Springer: New York, NY, USA, 1996. [Google Scholar]
  14. Kacprzyk, J.; Pedrycz, W. (Eds.) Springer Handbook of Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  15. Friston, K.J.; Friston, D.A. A Free Energy Formulation of Music Performance and Perception. Helmholtz Revisited. In Sound—Perception—Performance; Bader, R., Ed.; Springer Series: Current Research in Systematic Musicology; Springer: Berlin/Heidelberg, Germany, 2013; Volume 1, pp. 43–70. [Google Scholar]
  16. Baars, B.J.; Franklin, S.; Ramsoy, T.Z. Global workspace dynamics: Cortical binding and propagation enable conscious contents. Front. Psychol. 2013, 4, 200. [Google Scholar] [CrossRef]
  17. Haken, H. Brain Dynamics, 2nd ed.; Springer Series in Synergetics; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  18. Grossberg, S. Adaptive pattern classification and universal recoding I: Parallel development and coding of neural feature detectors. Biol. Cybern. 1976, 23, 121–134. [Google Scholar] [CrossRef] [PubMed]
  19. Grossberg, S. Adaptive pattern classification and universal recoding II: Feedback, expectation, olfaction, and illusion. Biol. Cybern. 1976, 23, 187–202. [Google Scholar] [CrossRef]
  20. Gjerdingen, R.O. Categorization of musical patterns by self-organizing neuronlike networks. Music Percept. 1990, 8, 339–370. [Google Scholar] [CrossRef]
  21. Briot, J.-P.; Hadjeres, G.; Pachet, F.-D. Deep Learning Techniques for Music Generation; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  22. Bader, R. (Ed.) Computational Phonogram Archiving; Springer Series: Current Research in Systematic Musicology; Springer: Berlin/Heidelberg, Germany, 2019; Volume 5. [Google Scholar]
  23. Blass, M.; Fischer, J.; Plath, N. Computational Phonogram Archiving: A generic framework for knowledge discovery in music archives. Phys. Today 2020, 73, 50–55. [Google Scholar] [CrossRef]
  24. Kozma, R.; Freeman, W.J. (Eds.) Cognitive Phase Transitions in the Cerebral Cortex—Enhancing the Neuron Doctrine by Modeling Neural Fields; Springer Series Studies, System, Decision, and Control; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  25. Ohl, F.W. On the Creation of Meaning in the Brain—Cortical Neurodynamics during Category Learning. In Cognitive Phase Transitions in the Cerebral Cortex—Enhancing the Neuron Doctrine by Modeling Neural Fields; Kozma, R., Freeman, W.J., Eds.; Springer Series Studies, System, Decision, and Control; Springer: Berlin/Heidelberg, Germany, 2016; pp. 147–159. [Google Scholar]
  26. Ohl, F.W.; Scheich, H.; Freeman, W.J. Change in pattern of ongoing cortical activity with auditory category learning. Nature 2001, 412, 733–736. [Google Scholar] [CrossRef] [PubMed]
  27. Freeman, W. Societies of Brains; Psychology Press: London, UK, 2014. [Google Scholar]
  28. Bader, R. Impulse Pattern Formulation (IPF) Brain Model. arXiv 2022, arXiv:2212.11021. [Google Scholar]
  29. Bader, R. Nonlinearities and Synchronization in Musical Acoustics and Music Psychology; Current Research in Systematic Musicology; Springer: Berlin/Heidelberg, Germany, 2013; Volume 2. [Google Scholar]
  30. Linke, S.; Bader, R.; Mores, R. The impulse pattern formulation (IPF) as a model of musical instruments—Investigation of stability and limits. Chaos 2019, 29, 103109. [Google Scholar] [CrossRef] [PubMed]
  31. Linke, S.; Bader, R.; Mores, R. Modeling synchronization in human musical rhythms using Impulse Pattern Formulation (IPF). arXiv 2021, arXiv:2112.03218. [Google Scholar]
  32. Buzsáki, G. Rhythms of the Brain; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  33. Sawicki, J.; Hartmann, L.; Bader, R.; Schöll, E. Modelling the perception of music in brain network dynamics. Front. Netw. Physiol. 2022, 2, 910920. [Google Scholar] [CrossRef] [PubMed]
  34. Buhusi, C.V.; Meck, W.H. What makes us tick? Functional and neural mechanisms of interval timing. Nat. Rev. Neurosci. 2005, 6, 755–765. [Google Scholar] [CrossRef] [PubMed]
  35. Gerster, M.; Berner, R.; Sawicki, J.; Zakharova, A.; Škoch, A.; Hlinka, J.; Lehnertz, K.; Schöll, E. FitzHugh-Nagumo oscillators on complex networks mimic epileptic seizure-related synchronization phenomena. Chaos 2020, 30, 123130. [Google Scholar] [CrossRef] [PubMed]
  36. Omelchenko, I.; Omel'chenko, O.E.; Hövel, P.; Schöll, E. When nonlocal coupling between oscillators becomes stronger: Patched synchrony or multichimera states. Phys. Rev. Lett. 2013, 110, 224101. [Google Scholar] [CrossRef] [PubMed]
  37. McCrae, N.; Elliott, S. Spiritual experiences in temporal lobe epilepsy: A literature review. Br. J. Neurosci. Nurs. 2012, 8, 346–351. [Google Scholar] [CrossRef]
  38. Babbs, C.F. Quantitative Reappraisal of the Helmholtz-Guyton Resonance Theory of Frequency Tuning in the Cochlea. J. Biophys. 2011, 2011, 435135. [Google Scholar] [CrossRef] [PubMed]
  39. Liu, S.; White, R.D. Orthotropic material properties of the gerbil basilar membrane. J. Acoust. Soc. Am. 2008, 123, 2160–2171. [Google Scholar] [CrossRef] [PubMed]
  40. Engel, A.K.; Fries, P. Chapter 3: Neuronal Oscillations, Coherence, and Consciousness. In The Neurology of Consciousness, 2nd ed.; Academic Press: Cambridge, MA, USA, 2016; pp. 49–60. [Google Scholar] [CrossRef]
  41. Freeman, W.J.; Quian Quiroga, R. Imaging Brain Function with EEG: Advanced Temporal and Spatial Analysis of Electroencephalographic Signals; Springer: New York, NY, USA, 2013. [Google Scholar]
Figure 1. Time series of the cochlea input C_t (blue) and the system parameter g (yellow) over the 266 s of the musical piece One Mic by NAS, using N = 100 reflection points. Both time series correlate with 0.38.
Figure 2. Frequency-dependent correlation of the cochlea input with the Kuramoto synchronization, K_C(f). As expected, the gamma band around 50–80 Hz shows the strongest correlation, while the correlation decreases at lower and higher frequencies. Very low frequencies show negative correlations, pointing to a convergence of the system during time windows of low sound input.
Figure 3. Kuramoto order parameter K_T(f) of 100 neural reflection points at a frequency of 50 Hz (yellow), together with the cochlea input C_T (blue). Mainly, windows of strong cochlea input correspond to large Kuramoto synchronization and vice versa.
Figure 4. Spectra of the cochlea input C_t and the system parameter g for N = 100. Peaks are present at about 50 Hz and 100 Hz. Strong amplitudes at very low frequencies point to the rhythm content of the musical piece.
Figure 5. Cochlea input C_T and Kuramoto order parameter K_T(2 Hz) for the N = 100 neural reflection point case. During times of low cochlea input, synchronization is very strong, up to 0.8, pointing to convergence behavior of the system during times of low input at low frequencies, similar to temporal lobe epilepsy.
Figure 6. Scaling law of brain synchronization measured as the Kuramoto order parameter K_C(f). In a log–log plot over a wide range of frequencies, a constant slope points to the scaling law of the IPF Brain model, which is consistent with EEG data.