
Explainable and Augmented Machine Learning for Biosignals and Biomedical Images

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Sensing and Imaging".

Viewed by 44378

Editors


Dr. Cosimo Ieracitano
Collection Editor
DICEAM Department, Mediterranea University of Reggio Calabria, Via Graziella Feo di Vito, 89060 Reggio Calabria, Italy
Interests: information theory; machine learning; deep learning; explainable machine learning; biomedical signal processing; brain computer interface; cybersecurity; computer vision; material informatics

Dr. Mufti Mahmud
Collection Editor
School of Science and Technology, Nottingham Trent University, Clifton, Nottingham NG11 8NS, UK
Interests: brain informatics; data analytics; brain–machine interfacing; Internet of healthcare things

Dr. Maryam Doborjeh
Collection Editor
Computer Science and Software Engineering, Auckland University of Technology, Auckland 1010, New Zealand
Interests: advanced AI development in neuroinformatics; neuromarketing; bioinformatics and cognitive computation

Dr. Aimé Lay-Ekuakille
Collection Editor
Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy
Interests: instrumentation; measurement and sensors; biomedicine; environment; industry; nanotechnology; machine learning; photovoltaic panel aging

Topical Collection Information

Dear Colleagues,

In recent decades, machine learning (ML) techniques have delivered encouraging breakthroughs in the biomedical research field, achieving outstanding predictive and classification performance.

However, ML algorithms are often perceived as black boxes with no explanation about the final decision process. In this context, explainable machine learning (XML) techniques intend to “open” the black box and provide further insight into the inner working mechanisms underlying artificial intelligence algorithms. Hence, the goal of XML is to explain and interpret outcomes, predictions, decisions, and recommendations automatically achieved by ML models in order to create more comprehensible and transparent machine decisions.

In medical applications, this additional understanding, together with the growing availability of medical/clinical data acquired from increasingly interconnected biosensors (based on the Internet of Things (IoT) paradigm) and recent advances in augmentation techniques (e.g., generative adversarial networks) capable of generating synthetic samples, could play a significant role in supporting clinicians' final decisions.

The proposed Topical Collection aims to collate innovative explainable ML-based approaches and augmented ML-based methodologies, as well as comprehensive survey papers, applied to problems in medicine and healthcare in order to develop the next generation of systems that can potentially lead to relevant advances in clinical and biomedical research.

Dr. Cosimo Ieracitano
Dr. Mufti Mahmud
Dr. Maryam Doborjeh
Dr. Aimé Lay-Ekuakille
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Pattern recognition
  • Explainable machine learning
  • Explainable deep learning
  • Interpretability
  • Explainability
  • Classification
  • Augmented machine learning
  • IoT and biosensors
  • Sensing technology for biomedical applications
  • Biomedical signal processing
  • Biosignals (EEG, ECG, EMG, etc.)
  • Imaging technology for biomedical applications
  • Biomedical image processing
  • Biomedical images (MRI, RX, PET, etc.)

Published Papers (11 papers)

2023


4 pages, 236 KiB  
Editorial
Editorial Topical Collection: “Explainable and Augmented Machine Learning for Biosignals and Biomedical Images”
by Cosimo Ieracitano, Mufti Mahmud, Maryam Doborjeh and Aimé Lay-Ekuakille
Sensors 2023, 23(24), 9722; https://doi.org/10.3390/s23249722 - 09 Dec 2023
Viewed by 785
Abstract
Machine learning (ML) is a well-known subfield of artificial intelligence (AI) that aims at developing algorithms and statistical models able to empower computer systems to automatically adapt to a specific task through experience or learning from data [...] Full article

2022


37 pages, 11483 KiB  
Article
An Ensemble Approach for the Prediction of Diabetes Mellitus Using a Soft Voting Classifier with an Explainable AI
by Hafsa Binte Kibria, Md Nahiduzzaman, Md. Omaer Faruq Goni, Mominul Ahsan and Julfikar Haider
Sensors 2022, 22(19), 7268; https://doi.org/10.3390/s22197268 - 25 Sep 2022
Cited by 23 | Viewed by 5556
Abstract
Diabetes is a chronic disease that continues to be a primary worldwide health concern, as it affects the health of the entire population. Over the years, many academics have attempted to develop a reliable diabetes prediction model using machine learning (ML) algorithms. However, these research investigations have had a minimal impact on clinical practice, as current studies focus mainly on improving the performance of complicated ML models while ignoring their explainability in clinical situations. Therefore, physicians find it difficult to understand these models and rarely trust them for clinical use. In this study, a carefully constructed, efficient, and interpretable diabetes detection method using an explainable AI has been proposed. The Pima Indian diabetes dataset was used, containing a total of 768 instances, of which 268 are diabetic and 500 non-diabetic, each described by several diabetes-related attributes. Here, six machine learning algorithms (artificial neural network (ANN), random forest (RF), support vector machine (SVM), logistic regression (LR), AdaBoost, XGBoost) have been used along with an ensemble classifier to diagnose diabetes. For each machine learning model, global and local explanations have been produced using Shapley additive explanations (SHAP), which are represented in different types of graphs to help physicians in understanding the model predictions. The balanced accuracy of the developed weighted ensemble model was 90% with an F1 score of 89% using five-fold cross-validation (CV). The median values were used for the imputation of the missing values, and the synthetic minority oversampling technique (SMOTETomek) was used to balance the classes of the dataset. The proposed approach can improve the clinical understanding of a diabetes diagnosis and help in taking necessary action at the very early stages of the disease. Full article
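The weighted soft-voting scheme described above can be sketched with scikit-learn. This is a minimal illustration on synthetic data, not the paper's actual Pima-Indians pipeline; the choice of base estimators, the per-model weights, and the dataset parameters are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# synthetic stand-in with a class imbalance roughly like Pima (500 vs 268)
X, y = make_classification(n_samples=768, n_features=8,
                           weights=[0.65, 0.35], random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",      # average predicted class probabilities
    weights=[1, 2, 1],  # illustrative per-model weights
)
scores = cross_val_score(ensemble, X, y, cv=5, scoring="balanced_accuracy")
print(round(scores.mean(), 3))
```

In the paper's workflow, SHAP global and local explanations would then be computed for each fitted base model (e.g., via the `shap` library) to produce the clinician-facing plots.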

16 pages, 2726 KiB  
Article
A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features
by Cameron Severn, Krithika Suresh, Carsten Görg, Yoon Seong Choi, Rajan Jain and Debashis Ghosh
Sensors 2022, 22(14), 5205; https://doi.org/10.3390/s22145205 - 12 Jul 2022
Cited by 13 | Viewed by 3426
Abstract
Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as “black boxes”. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnoses or treatment. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and aid physicians in making medical decisions. Specifically, in the field of medical imaging analysis the most used methods for explaining deep learning-based model predictions are saliency maps that highlight important areas of an image. However, they do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values as tools to explain outcome predictions from complex prediction models built with medical imaging with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to various settings. We demonstrate the use of this workflow for developing and explaining a prediction model using MRI data from glioma patients to predict a genetic mutation. Full article
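Shapley values, on which both this pipeline and SHAP build, can be computed exactly for small feature sets by enumerating coalitions. The sketch below is a from-scratch illustration with a toy linear model whose exact attributions are known in closed form; the `predict`/`baseline` names are hypothetical, and this is not the authors' radiomics pipeline:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values by brute-force coalition enumeration (small n only)."""
    n = len(x)

    def v(S):
        # value of coalition S: features in S take x's values, others the baseline
        z = list(baseline)
        for i in S:
            z[i] = x[i]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S + (i,)) - v(S))  # marginal contribution
        phi.append(total)
    return phi

# linear model: Shapley value of feature i is exactly w_i * (x_i - baseline_i)
w = [2.0, -1.0, 0.5]
predict = lambda z: sum(wi * zi for wi, zi in zip(w, z))
phi = shapley_values(predict, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0])
print([round(p, 6) for p in phi])  # [2.0, -2.0, 1.5]
```

The brute force is exponential in the number of features, which is why the pipeline relies on SHAP's efficient approximations for realistic radiomics feature sets.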

2021


12 pages, 702 KiB  
Article
Detection of Error-Related Potentials in Stroke Patients from EEG Using an Artificial Neural Network
by Nayab Usama, Imran Khan Niazi, Kim Dremstrup and Mads Jochumsen
Sensors 2021, 21(18), 6274; https://doi.org/10.3390/s21186274 - 18 Sep 2021
Cited by 7 | Viewed by 2829
Abstract
Error-related potentials (ErrPs) have been proposed as a means for improving brain–computer interface (BCI) performance by either correcting an incorrect action performed by the BCI or labelling data for continuous adaptation of the BCI to improve its performance. The latter approach could be relevant within stroke rehabilitation, where BCI calibration time could be minimized by using a generalized classifier that is continuously individualized throughout the rehabilitation session. This may be achieved if data are correctly labelled. Therefore, the aims of this study were to: (1) classify single-trial ErrPs produced by individuals with stroke, (2) investigate test–retest reliability, and (3) compare different classifier calibration schemes with different classification methods (artificial neural network, ANN, and linear discriminant analysis, LDA) with waveform features as input for meaningful physiological interpretability. Twenty-five individuals with stroke operated a sham BCI on two separate days where they attempted to perform a movement, after which they received feedback (error/correct) while continuous EEG was recorded. The EEG was divided into epochs: ErrPs and NonErrPs. The epochs were classified with a multi-layer perceptron ANN based on temporal features or the entire epoch. Additionally, the features were classified with shrinkage LDA. The features were waveforms of the ErrPs and NonErrPs from the sensorimotor cortex to improve the explainability and interpretation of the output of the classifiers. Three calibration schemes were tested: within-day, between-day, and across-participant. Using within-day calibration, 90% of the data were correctly classified with the entire epoch as input to the ANN; accuracy decreased to 86% and 69% when using temporal features as input to the ANN and LDA, respectively. There was poor test–retest reliability between the two days, and the other calibration schemes led to accuracies in the range of 63–72%, with LDA performing the best. There was no association between the individuals’ impairment level and classification accuracies. The results show that ErrPs can be classified in individuals with stroke, but that user- and session-specific calibration is needed for optimal ErrP decoding with this approach. The use of ErrP/NonErrP waveform features makes it possible to have a physiologically meaningful interpretation of the output of the classifiers. The results may have implications for labelling data continuously in BCIs for stroke rehabilitation and thus potentially improve BCI performance. Full article
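The shrinkage-LDA branch of the study can be sketched on synthetic single-trial epochs. Everything here is an illustrative assumption (a 250 Hz sampling rate, a negative peak near 250 ms for ErrP trials, and a small hand-picked feature set), not the study's recordings or exact features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
t = np.arange(0, 0.8, 1 / 250)  # 0.8 s epochs at an assumed 250 Hz

def epoch(is_errp):
    # synthetic single-trial EEG: ErrP trials get a negative deflection ~250 ms
    sig = rng.normal(scale=1.0, size=t.size)
    if is_errp:
        sig -= 3.0 * np.exp(-((t - 0.25) ** 2) / 0.002)
    return sig

def waveform_features(sig):
    i = int(sig.argmin())
    # peak amplitude, peak latency, max amplitude, mean level
    return [float(sig[i]), float(t[i]), float(sig.max()), float(sig.mean())]

X = np.array([waveform_features(epoch(lbl)) for lbl in [True] * 100 + [False] * 100])
y = np.array([1] * 100 + [0] * 100)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # shrinkage LDA
clf.fit(X[::2], y[::2])              # calibrate on half the trials
print(clf.score(X[1::2], y[1::2]))   # held-out accuracy
```

Using interpretable waveform features (amplitudes, latencies) rather than raw epochs is what gives the classifier's weights a physiological reading, which is the study's explainability argument.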

14 pages, 7546 KiB  
Article
Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers
by Iam Palatnik de Sousa, Marley M. B. R. Vellasco and Eduardo Costa da Silva
Sensors 2021, 21(16), 5657; https://doi.org/10.3390/s21165657 - 23 Aug 2021
Cited by 24 | Viewed by 3636
Abstract
Problem: An application of Explainable Artificial Intelligence methods for COVID CT-Scan classifiers is presented. Motivation: It is possible that classifiers are using spurious artifacts in dataset images to achieve high performances, and such explainable techniques can help identify this issue. Aim: For this purpose, several approaches were used in tandem, in order to create a complete overview of the classifications. Methodology: The techniques used included GradCAM, LIME, RISE, Squaregrid, and direct Gradient approaches (Vanilla, Smooth, Integrated). Main results: Among the deep neural network architectures evaluated for this image classification task, VGG16 was shown to be the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. Further impacts: Results further show that small differences in validation accuracies can cause drastic changes in explanation heatmaps for DenseNet architectures, indicating that small changes in validation accuracy may have large impacts on the biases learned by the networks. Notably, the strong performance metrics achieved by all these networks (accuracy, F1 score, and AUC all in the 80 to 90% range) could give users the erroneous impression that there is no bias. However, the analysis of the explanation heatmaps highlights the bias. Full article
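Of the methods listed, the masking-based ones (RISE, Squaregrid) work by perturbing image regions and watching the score change. As a hedged, dependency-light stand-in for those techniques, a simple occlusion-sensitivity map makes the bias-detection idea concrete: the toy "classifier" below keys on a bright corner artifact (mimicking a spurious dataset artifact), and the heatmap exposes exactly that region:

```python
import numpy as np

def occlusion_heatmap(score_fn, image, patch=4):
    """Score drop when each patch is zeroed out: large drop = model relies on it."""
    H, W = image.shape
    base = score_fn(image)
    heat = np.zeros((H // patch, W // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

rng = np.random.default_rng(0)
img = rng.random((16, 16))
img[:4, :4] += 2.0                      # spurious bright-corner artifact
score = lambda x: x[:4, :4].mean()      # biased toy classifier: looks only at corner

heat = occlusion_heatmap(score, img)
hottest = tuple(int(k) for k in np.unravel_index(heat.argmax(), heat.shape))
print(hottest)  # → (0, 0): the heatmap points straight at the artifact
```

This is the pattern the paper exploits at scale: strong accuracy metrics hide the bias, but the explanation heatmap localizes what the network actually uses.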

21 pages, 6148 KiB  
Article
A Clinically Interpretable Computer-Vision Based Method for Quantifying Gait in Parkinson’s Disease
by Samuel Rupprechter, Gareth Morinan, Yuwei Peng, Thomas Foltynie, Krista Sibley, Rimona S. Weil, Louise-Ann Leyland, Fahd Baig, Francesca Morgante, Ro’ee Gilron, Robert Wilt, Philip Starr, Robert A. Hauser and Jonathan O’Keeffe
Sensors 2021, 21(16), 5437; https://doi.org/10.3390/s21165437 - 12 Aug 2021
Cited by 28 | Viewed by 5470
Abstract
Gait is a core motor function and is impaired in numerous neurological diseases, including Parkinson’s disease (PD). Treatment changes in PD are frequently driven by gait assessments in the clinic, commonly rated as part of the Movement Disorder Society (MDS) Unified PD Rating Scale (UPDRS) assessment (item 3.10). We proposed and evaluated a novel approach for estimating severity of gait impairment in Parkinson’s disease using a computer vision-based methodology. The system we developed can be used to obtain an estimate for a rating to catch potential errors, or to gain an initial rating in the absence of a trained clinician—for example, during remote home assessments. Videos (n=729) were collected as part of routine MDS-UPDRS gait assessments of Parkinson’s patients, and a deep learning library was used to extract body key-point coordinates for each frame. Data were recorded at five clinical sites using commercially available mobile phones or tablets, and had an associated severity rating from a trained clinician. Six features were calculated from time-series signals of the extracted key-points. These features characterized key aspects of the movement including speed (step frequency, estimated using a novel Gamma-Poisson Bayesian model), arm swing, postural control and smoothness (or roughness) of movement. An ordinal random forest classification model (with one class for each of the possible ratings) was trained and evaluated using 10-fold cross validation. Step frequency point estimates from the Bayesian model were highly correlated with manually labelled step frequencies of 606 video clips showing patients walking towards or away from the camera (Pearson’s r=0.80, p<0.001). Our classifier achieved a balanced accuracy of 50% (chance = 25%). Estimated UPDRS ratings were within one of the clinicians’ ratings in 95% of cases. There was a significant correlation between clinician labels and model estimates (Spearman’s ρ=0.52, p<0.001). We show how the interpretability of the feature values could be used by clinicians to support their decision-making and provide insight into the model’s objective UPDRS rating estimation. The severity of gait impairment in Parkinson’s disease can be estimated using a single patient video, recorded using a consumer mobile device and within standard clinical settings; i.e., videos were recorded in various hospital hallways and offices rather than gait laboratories. This approach can support clinicians during routine assessments by providing an objective rating (or second opinion), and has the potential to be used for remote home assessments, which would allow for more frequent monitoring. Full article
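The Gamma-Poisson step-frequency model lends itself to a compact sketch: with a Gamma prior on the step rate and Poisson-distributed step counts per video window, the posterior update is conjugate and closed-form. The prior and window values below are illustrative assumptions, not the paper's actual parameters:

```python
def gamma_poisson_posterior(step_counts, window_s, a=1.0, b=1.0):
    """Conjugate update: Gamma(a, b) prior on steps/second, Poisson counts.

    Observing counts k_1..k_n over windows of length window_s gives
    posterior Gamma(a + sum(k), b + n * window_s)."""
    a_post = a + sum(step_counts)
    b_post = b + len(step_counts) * window_s
    posterior_mean = a_post / b_post  # point estimate of the step rate (Hz)
    return a_post, b_post, posterior_mean

# e.g. four 2-second windows with 4, 5, 4, 6 detected steps
a, b, rate = gamma_poisson_posterior([4, 5, 4, 6], window_s=2.0)
print(round(rate, 3))  # (1 + 19) / (1 + 8) = 20/9 ≈ 2.222 steps/s
```

The posterior also yields credible intervals for free, which is one reason a Bayesian point estimate is attractive for noisy key-point-derived step detections.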

21 pages, 7858 KiB  
Article
Deep Learning of Explainable EEG Patterns as Dynamic Spatiotemporal Clusters and Rules in a Brain-Inspired Spiking Neural Network
by Maryam Doborjeh, Zohreh Doborjeh, Nikola Kasabov, Molood Barati and Grace Y. Wang
Sensors 2021, 21(14), 4900; https://doi.org/10.3390/s21144900 - 19 Jul 2021
Cited by 8 | Viewed by 4176
Abstract
The paper proposes a new method for deep learning and knowledge discovery in a brain-inspired Spiking Neural Network (SNN) architecture that enhances the model’s explainability while learning from streaming spatiotemporal brain data (STBD) in an incremental and on-line mode of operation. This led to the extraction of spatiotemporal rules from SNN models that explain why a certain decision (output prediction) was made by the model. During the learning process, the SNN created dynamic neural clusters, captured as polygons, which evolved in time and continuously changed their size and shape. The dynamic patterns of the clusters were quantitatively analyzed to identify the important STBD features that correspond to the most activated brain regions. We studied the trend of dynamically created clusters and their spike-driven events that occur together in specific space and time. The research contributes to: (1) enhanced interpretability of SNN learning behavior through dynamic neural clustering; (2) feature selection and enhanced accuracy of classification; (3) spatiotemporal rules to support model explainability; and (4) a better understanding of the dynamics in STBD in terms of feature interaction. The clustering method was applied to a case study of electroencephalogram (EEG) data, recorded from a healthy control group (n = 21) and opiate use (n = 18) subjects while they were performing a cognitive task. The SNN models of EEG demonstrated different trends of dynamic clusters across the groups. This motivated the selection of a group of marker EEG features and improved EEG classification accuracy to 92%, compared with all-feature classification. During learning of EEG data, the areas of neurons in the SNN model that form adjacent clusters (corresponding to neighboring EEG channels) were detected as fuzzy boundaries that explain overlapping activity of brain regions for each group of subjects. Full article

39 pages, 8008 KiB  
Article
Machine Learning Methods for Fear Classification Based on Physiological Features
by Livia Petrescu, Cătălin Petrescu, Ana Oprea, Oana Mitruț, Gabriela Moise, Alin Moldoveanu and Florica Moldoveanu
Sensors 2021, 21(13), 4519; https://doi.org/10.3390/s21134519 - 01 Jul 2021
Cited by 10 | Viewed by 4407
Abstract
This paper focuses on the binary classification of the emotion of fear, based on the physiological data and subjective responses stored in the DEAP dataset. We performed a mapping between the discrete and dimensional emotional information considering the participants’ ratings and extracted a substantial set of 40 types of features from the physiological data, which represented the input to various machine learning algorithms—Decision Trees, k-Nearest Neighbors, Support Vector Machine and artificial networks—accompanied by dimensionality reduction, feature selection and the tuning of the most relevant hyperparameters, boosting classification accuracy. The methodology we approached included tackling different situations, such as resolving the problem of having an imbalanced dataset through data augmentation, reducing overfitting, computing various metrics in order to obtain the most reliable classification scores and applying the Local Interpretable Model-Agnostic Explanations method for interpretation and for explaining predictions in a human-understandable manner. The results show that fear can be predicted very well (accuracies ranging from 91.7% using Gradient Boosting Trees to 93.5% using dimensionality reduction and Support Vector Machine) by extracting the most relevant features from the physiological data and by searching for the best parameters which maximize the machine learning algorithms’ classification scores. Full article
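The Local Interpretable Model-Agnostic Explanations (LIME) step used here can be illustrated from scratch: perturb samples around one instance, weight them by proximity, and fit a locally weighted linear surrogate whose coefficients explain the prediction. This sketch uses a toy linear black box (so the recovered weights are checkable) rather than the paper's classifiers or the `lime` package, and all parameter values are illustrative:

```python
import numpy as np

def lime_style_explain(predict, x, n_samples=500, kernel_width=1.0, seed=0):
    """Fit a locally weighted linear surrogate around instance x (LIME's core idea)."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb around x
    y = predict(Z)
    d2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-d2 / kernel_width**2)                        # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])              # add intercept column
    # weighted least squares via the normal equations
    coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return coef[:-1]                                         # per-feature weights

# a linear black box: the local surrogate should recover its coefficients
black_box = lambda Z: Z @ np.array([3.0, -1.0])
phi = lime_style_explain(black_box, np.array([0.5, 0.2]))
print(np.round(phi, 3))  # ≈ [3., -1.]
```

For a genuinely nonlinear fear classifier the surrogate is only locally faithful, which is exactly the trade-off LIME accepts in exchange for human-readable feature weights.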

21 pages, 4014 KiB  
Article
An Explainable Machine Learning Approach Based on Statistical Indexes and SVM for Stress Detection in Automobile Drivers Using Electromyographic Signals
by Olivia Vargas-Lopez, Carlos A. Perez-Ramirez, Martin Valtierra-Rodriguez, Jesus J. Yanez-Borjas and Juan P. Amezquita-Sanchez
Sensors 2021, 21(9), 3155; https://doi.org/10.3390/s21093155 - 01 May 2021
Cited by 21 | Viewed by 3216
Abstract
The economic and personal consequences that a car accident generates for society have been increasing in recent years. One of the causes that can generate a car accident is the stress level of the driver; consequently, the detection of stress events is a highly desirable task. In this article, the efficacy that statistical time features (STFs), such as root mean square, mean, variance, and standard deviation, among others, can reach in detecting stress events using electromyographic signals in drivers is investigated, since they can measure subtle changes in a signal. The obtained results show that the variance and standard deviation, coupled with a support vector machine (SVM) classifier with a cubic kernel, are effective for detecting stress events, reaching an AUC of 0.97. In this sense, since the SVM has different kernels that can be trained, they are used to find out which one has the best efficacy using the STFs as feature inputs and a training strategy; thus, information about model explainability can be determined. The explainability of the machine learning algorithm allows generating a deeper comprehension of the model's efficacy and of which model should be selected depending on the features used for its development. Full article
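The STF-plus-cubic-kernel-SVM idea can be sketched on synthetic EMG-like windows. The generative assumption below (stress modeled as higher muscle-activity variance) and all parameters are illustrative, not the study's data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def stat_time_features(sig):
    # the statistical time features named in the paper
    return [float(sig.mean()), float(sig.var()), float(sig.std()),
            float(np.sqrt(np.mean(sig**2)))]  # root mean square

def emg_window(stressed):
    # synthetic EMG-like window: stress assumed to raise muscle activity
    scale = 1.5 if stressed else 1.0
    return rng.normal(scale=scale, size=512)

X = np.array([stat_time_features(emg_window(s)) for s in [True] * 80 + [False] * 80])
y = np.array([1] * 80 + [0] * 80)

clf = SVC(kernel="poly", degree=3)   # cubic kernel, as in the paper
clf.fit(X[::2], y[::2])
print(clf.score(X[1::2], y[1::2]))   # held-out accuracy
```

Because the features are four named statistics rather than a learned embedding, the fitted model's reliance on variance and standard deviation can be inspected directly, which is the explainability angle the abstract describes.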

19 pages, 2234 KiB  
Article
A Novel Coupled Reaction-Diffusion System for Explainable Gene Expression Profiling
by Muhamed Wael Farouq, Wadii Boulila, Zain Hussain, Asrar Rashid, Moiz Shah, Sajid Hussain, Nathan Ng, Dominic Ng, Haris Hanif, Mohamad Guftar Shaikh, Aziz Sheikh and Amir Hussain
Sensors 2021, 21(6), 2190; https://doi.org/10.3390/s21062190 - 21 Mar 2021
Cited by 2 | Viewed by 2761
Abstract
Machine learning (ML)-based algorithms are playing an important role in cancer diagnosis and are increasingly being used to aid clinical decision-making. However, these commonly operate as ‘black boxes’ and it is unclear how decisions are derived. Recently, techniques have been applied to help us understand how specific ML models work and to explain the rationale for their outputs. This study aims to determine why a given type of cancer has a certain phenotypic characteristic. Cancer results in cellular dysregulation, and a thorough consideration of cancer regulators is required. This would increase our understanding of the nature of the disease and help discover more effective diagnostic, prognostic, and treatment methods for a variety of cancer types and stages. Our study proposes a novel explainable analysis of potential biomarkers denoting tumorigenesis in non-small cell lung cancer. A number of these biomarkers are known to appear following various treatment pathways. An enhanced analysis is enabled through a novel mathematical formulation for the regulators of mRNA, the regulators of ncRNA, and the coupled mRNA–ncRNA regulators. Temporal gene expression profiles are approximated in a two-dimensional spatial domain for the transition states before converging to the stationary state, using a system comprising coupled reaction–diffusion partial differential equations. Simulation experiments demonstrate that the proposed mathematical gene-expression profile represents a best fit for the population abundance of these oncogenes. In the future, our proposed solution can lead to the development of alternative interpretable approaches, through the application of ML models to discover unknown dynamics in gene regulatory systems. Full article
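A coupled reaction-diffusion system of this general kind can be stepped with explicit finite differences. The sketch below is an illustrative 1-D toy (the parameters and the symmetric coupling form are assumptions, not the paper's formulation, where u and v merely stand in for coupled mRNA/ncRNA regulator levels); with periodic boundaries, diffusion plus symmetric coupling conserves total abundance, which the final check verifies:

```python
import numpy as np

def rd_step(u, v, Du=0.1, Dv=0.05, k=0.2, dt=0.01, dx=1.0):
    """One explicit finite-difference step of a 1-D coupled reaction-diffusion pair."""
    lap = lambda f: (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2  # periodic BCs
    u_next = u + dt * (Du * lap(u) + k * (v - u))  # diffusion + symmetric coupling
    v_next = v + dt * (Dv * lap(v) - k * (v - u))
    return u_next, v_next

rng = np.random.default_rng(0)
u, v = rng.random(64), rng.random(64)
total0 = u.sum() + v.sum()
for _ in range(1000):
    u, v = rd_step(u, v)

# diffusion and symmetric coupling conserve total abundance on a periodic domain
print(np.isclose(u.sum() + v.sum(), total0))  # True
```

The explicit scheme is stable here because dt * Du / dx^2 = 0.001 is far below the 0.5 stability bound; stiffer parameter choices would call for an implicit solver.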

2020


29 pages, 8450 KiB  
Article
Interpretability of Spatiotemporal Dynamics of the Brain Processes Followed by Mindfulness Intervention in a Brain-Inspired Spiking Neural Network Architecture
by Zohreh Doborjeh, Maryam Doborjeh, Mark Crook-Rumsey, Tamasin Taylor, Grace Y. Wang, David Moreau, Christian Krägeloh, Wendy Wrapson, Richard J. Siegert, Nikola Kasabov, Grant Searchfield and Alexander Sumich
Sensors 2020, 20(24), 7354; https://doi.org/10.3390/s20247354 - 21 Dec 2020
Cited by 13 | Viewed by 4753
Abstract
Mindfulness training is associated with improvements in psychological wellbeing and cognition, yet the specific underlying neurophysiological mechanisms underpinning these changes are uncertain. This study uses a novel brain-inspired artificial neural network to investigate the effect of mindfulness training on electroencephalographic function. Participants completed a 4-tone auditory oddball task (that included targets and physically similar distractors) at three assessment time points. In Group A (n = 10), these tasks were given immediately prior to 6-week mindfulness training, immediately after training and at a 3-week follow-up; in Group B (n = 10), these were during an intervention waitlist period (3 weeks prior to training), pre-mindfulness training and post-mindfulness training. Using a spiking neural network (SNN) model, we evaluated concurrent neural patterns generated across space and time from features of electroencephalographic data capturing the neural dynamics associated with the event-related potential (ERP). This technique capitalises on the temporal dynamics of the shifts in polarity throughout the ERP and spatially across electrodes. Findings support anteriorisation of connection weights in response to distractors relative to target stimuli. Right frontal connection weights to distractors were associated with trait mindfulness (positively) and depression (inversely). Moreover, mindfulness training was associated with an increase in connection weights to targets (bilateral frontal, left frontocentral, and temporal regions only) and distractors. SNN models were superior to other machine learning methods in the classification of brain states as a function of mindfulness training. Findings suggest SNN models can provide useful information that differentiates brain states based on distinct task demands and stimuli, as well as changes in brain states as a function of psychological intervention. Full article
