Article

AI Prediction of Brain Signals for Human Gait Using BCI Device and FBG Based Sensorial Platform for Plantar Pressure Measurements

by
Asad Muhammad Butt
1,*,
Hassan Alsaffar
2,3,
Muhannad Alshareef
2 and
Khurram Karim Qureshi
4,5
1
College of Chemicals & Materials, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
2
Electrical Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
3
Physics Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
4
Optical Communications and Sensors Laboratory (OCSL), Electrical Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
5
Center for Communication Systems & Sensing, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(8), 3085; https://doi.org/10.3390/s22083085
Submission received: 23 February 2022 / Revised: 30 March 2022 / Accepted: 11 April 2022 / Published: 18 April 2022
(This article belongs to the Special Issue Wearable Medical Sensors and Artificial Intelligence)

Abstract:
Artificial intelligence (AI) is gaining ground in the development of modern solutions for biomedical problems such as the prediction of human gait for human rehabilitation. An attempt was made to use plantar pressure information through fiber Bragg grating (FBG) sensors mounted on an in-sole, in tandem with a brain-computer interface (BCI) device, to predict brain signals corresponding to the sitting, standing, and walking postures of a person. Posture classification was attained with an accuracy of 87–93% from FBG and BCI signals using machine learning models such as k-nearest neighbors (KNN), logistic regression (LR), support vector machine (SVM), and naïve Bayes (NB). These models were used to identify the electrodes responding to the sitting, standing, and walking activities of four users from a 16-channel BCI device. Six electrode positions based on the 10–20 system for electroencephalography (EEG) were identified as the most sensitive to plantar activities and found to be consistent with clinical investigations of the sensorimotor cortex during foot movement. Brain EEG corresponding to given FBG data was predicted with the lowest mean square error (MSE) values (0.065–0.109) by a long short-term memory (LSTM) machine learning model when compared to the recurrent neural network (RNN) and gated recurrent unit (GRU) models.

1. Introduction

Human gait and other clinical investigations related to human biomechanics are essential to the well-being of patients suffering from impediments to locomotion. These can arise from numerous causes, such as injuries, neurological disorders [1], and adaptation to prosthetic devices in the case of amputation. Such investigations are useful not only in human rehabilitation but also in developing robots that can mimic human motion and maintain gait stability while performing various human-like activities such as sitting, standing, walking, running, and jumping. The human gait is illustrated in Figure 1, where successive actions to support walking while maintaining contact of the foot with the ground are shown. Other human postures such as sitting and standing are also established using foot contact with the ground. The brain's perception of these postures through the pressure profiles developed on the sole (plantar pressure) during various contact scenarios enables stable gait.
Considerable research has been done in the area of plantar pressure investigation [2,3,4,5]. The human foot sole has plantar nerves that carry information to the brain's motor cortex, where it is perceived and interpreted for an adequate response. In the case of foot amputation, the sensory loss disables the patient's perception of foot contact and causes psychological trauma [6]. A large number of patients who undergo amputation of the lower extremities are those with vascular diseases such as diabetes [7]. Lower extremity loss can mean the loss of a foot or the entire leg. The annual number of lower extremity amputees in the US alone is around 185,000, according to the Amputee Coalition of the USA. In the Middle East and North Africa (MENA) region, Saudi Arabia has the highest rate of lower limb amputation owing to diabetes [8]. To improve the quality of life of amputees and alleviate the psychological trauma that these patients suffer, lifelike prosthetic attachment to the body plays an important role.
The current state of the art in prosthetic devices [9,10,11] has created numerous opportunities for researchers to venture into areas where miniature sensors embedded into in-soles can help determine plantar pressures [1,12,13]. One such sensor is the fiber Bragg grating (FBG), which in recent times has found use in novel medical applications [14,15,16]. FBGs are also being implemented in plantar pressure measurement systems [1,17]. The advantages of FBGs lie in their miniature size, sensitivity, immunity to electromagnetic interference (EMI), and resistance to harsh conditions once an adequate protective coating is applied [18].
Artificial intelligence (AI) has taken research in biomedical applications to new heights by providing highly accurate diagnosis and prognosis in the presence of sufficient patient data [19]. The involvement of AI in human gait prediction [20,21] reduces effort and enables accurate diagnostics for identifying a patient's disease(s) and suggesting corrective actions. Besides plantar pressure information, brain electroencephalography (EEG) also provides valuable insight into the patient's perception of touch (in this case, touch of the foot). BCI devices have been used in recent research to study and monitor brain activity using EEG signals [22]. EEG signal analysis has been widely studied in the last decade. Many machine learning methods, such as SVMs, KNNs [23], artificial neural networks (ANNs), and deep neural networks (DNNs) [24], have been presented for EEG signal processing. In addition, many new signal processing techniques [25,26] have been used for EEG signal analysis. Companies such as Neuralink are unraveling the mysteries of the human mind through implantable brain-computer interfaces.
Haptic classifications [27] are also important, but the understanding of primary contact scenarios through plantar pressure variation is vital. In the case of sensory loss [28], the brain's motor cortex retains the memory of the pre-amputation sensory feedback [29], which, if understood properly, can be reconstructed through the plantar pressure of the able foot. Research into a comprehensive product that enables patients to regain sensory feeling through plantar pressure measurements, BCI, and the use of AI will have a substantial impact on society and provide opportunities for technology entrepreneurs to venture into biomedical applications.
Thus, an investigation was conducted to design a process for estimating EEG signals from plantar pressure using FBG sensors. Plantar pressure and FBG data were collected from four men, aged 23 to 24 years. The participants filled out a consent form agreeing to provide their clinical data. The experiments were designed in two parts. In the first part, two participants wore a head-mounted BCI device and performed the actions of (a) sitting, (b) standing, and (c) walking. This experiment was conducted to classify the postures from the incoming data of the BCI device. The classification of activities was followed by a second part of the experiment, where four participants performed the sitting, standing, and walking activities to collect enough data samples to run a machine learning model for predicting the brain EEG signal in a specific posture. The four participants wore an FBG-mounted insole with FBGs located at three high-pressure points in the plantar region, in addition to the head-mounted BCI device for recording the EEG signals.
The paper is organized in the following manner: Section 1 introduces the topic and the necessity of the research in the specific domain. Section 2 discusses the sensors used, their placement and the importance of the BCI for this research. Section 3 explains the test setup focusing on the devices used and data collection from the FBG sensors and the BCI. Section 4 details the AI implementation with insight into experiment design, preparation for data recording and utilization of various machine learning models for the two parts of the experiment (as discussed earlier), while providing results and discussion along with the conclusion in Section 5. Finally, Section 6 provides recommendations for future work.

2. FBG Sensors and BCI Device

Two different types of sensors were used in this work: FBG sensors to collect plantar pressure data, and electrodes mounted on the headset of a BCI device to collect the EEG signals. The FBG sensor and the BCI device are described further below.

2.1. FBG Sensor

Fiber Bragg grating (FBG) sensors are based on optical fiber technology, where a specific region of the fiber (usually 10–20 mm along the fiber length) is inscribed by ultraviolet (UV) radiation to produce gratings with a specific period. When light passes through the fiber, the grating period provides a distinctive reflected wavelength signature for the inscribed section according to Bragg's law, and the sensor responds to changes in pressure and temperature with a wavelength shift visible on an optical interrogator display, as shown in Figure 2.
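The Bragg condition referred to above can be stated explicitly; the following is standard FBG theory rather than equations reproduced from this paper:

```latex
% Bragg reflection condition: lambda_B is the reflected wavelength,
% n_eff the effective refractive index, Lambda the grating period.
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda

% Strain and temperature shift the reflected wavelength:
\frac{\Delta\lambda_B}{\lambda_B}
  = (1 - p_e)\,\varepsilon + (\alpha_\Lambda + \alpha_n)\,\Delta T
```

Here \(p_e\) is the photo-elastic coefficient, \(\varepsilon\) the axial strain, and \(\alpha_\Lambda\), \(\alpha_n\) the thermal expansion and thermo-optic coefficients; pressure on the insole strains the grating and shifts \(\lambda_B\), which is what the interrogator tracks.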
The FBGs are small (250 μm in diameter) and are sometimes coated with a protective layer to withstand harsh environments, as shown in various characterization studies [30].

2.2. Brain Computer Interface (BCI) Device

Brain mapping of various human activities can be produced through magnetic resonance imaging (MRI) [29,31], computer tomography (CT) scans, and BCI devices [32,33] that pick up the electrical impulses generated by neurons firing in the brain through electrodes positioned over the patient's scalp. Many modern BCI devices use the standard 10–20 electrode placement for EEG examination. The 10–20 system, or International 10–20 system, is an internationally recognized method to describe and apply the location of scalp electrodes, as shown in Figure 3. The BCI device used in the current research is produced by OpenBCI, which creates tools for biosensing and neuroscience. This particular BCI contains 16 electrodes providing 16 channels with a Cyton + Daisy biosensing board and an Ultracortex Mark IV EEG headset that comes pre-assembled, as shown in Figure 4. Brain activity during foot movement has been previously reported in [34,35] and is shown in Figure 5.
The objective of this research is to study brain signals and plantar pressure, analyze, and predict them in order to facilitate a healthy gait cycle for patients, including the following:
  • Designing an automated process that can improve on gait analysis data and provide valuable insight for the development of smart prosthetics in the future.
  • Incorporating patient’s feedback using plantar pressure sensors and brain signals for correction and calibration.
  • Creating a classification model that identifies brain activity related to plantar pressure profiles and predicts brain activity signals based on monitored plantar pressure.

3. Experimental Setup

This section provides the details of the experimental setup utilized in the study. Before advancing into data collection, it was important to identify the sensor positioning. It is of critical importance to map an accurate ground contact scenario through the measurement of plantar pressure in key regions of the foot sole. The foot sole can be divided into numerous regions as shown in Figure 6.
Many researchers have introduced piezoresistive sensors at eight different locations [36], photo-resistive sensors at six [37], and FBG sensors at five locations on the in-sole [38]. Our work incorporates three stand-alone FBG sensors placed at (i) the juncture of M02 and M04, (ii) the juncture of M03 and M04, and (iii) the juncture of M07 and M08, as shown in Figure 7a. The selection of these points comes from previous clinical investigations of young adults [39]. The pressure map created with an ink pad is shown in Figure 7b for each participant. The correlation of the clinical data with the information provided by the participants, such as age, weight, height, and foot size, is shown in Table 1. A MATLAB script was prepared to locate the sensor placement positions for all four participants. Later on, FBG data were collected from four sets of locations, slightly different for each individual, to capture maximum pressure values based on the variation in Table 1.
The experimental setup consists of three FBG sensors connected to a broadband light source (1510–1590 nm) and a compact optical interrogator IMON USB 512 produced by IBSEN Photonics. All the devices are connected through an optical circulator, as shown in Figure 8.
The FBG sensors were placed on a 5 mm thick polyethylene foam insole, as shown in Figure 9a. The FBGs were located at customized positions based on the user's foot size, weight, and height. The in-sole prototype, equipped with three FBGs glued in place, was built into a wearable sandal arrangement with a Velcro strap for a multi-user design, as shown in Figure 9b. One of the participants can be seen in Figure 9c wearing the BCI headset and the FBG-equipped insole. Data visualization is available through two graphical user interfaces (GUIs): (i) the Ibsen Evaluation Software for FBG wavelength monitoring and (ii) the OpenBCI System Control Panel, as shown in Figure 10. In Figure 10a, the GUI for the FBG interrogator allows the user to monitor and plot the connected FBGs, identified by their signature wavelengths. When selecting FBGs, it is important that the wavelength shift indicative of one FBG's response to applied pressure does not overlap with the wavelength shifts of the others. The optical interrogator registers the peak of each FBG by its corresponding wavelength signature. In our case, the three FBGs had wavelength values of 1535.068 nm, 1539.966 nm, and 1545.341 nm, corresponding to the midfoot, heel, and toe areas of the foot. Figure 10b shows the GUI for the BCI device, where the brain activity registered by the 16 electrode channels is shown separately in the left part of the GUI and collectively in the top right corner (amplitude vs. frequency FFT plot). The amplitude values, i.e., microvolt (μV) readings, were used as the data to interpret the brain's response to various foot movements during posture change and walking. The bottom right corner of the GUI window shows the head plot indicating the brain region active during various body actions of the user and the 16-electrode positioning on the user's head.

4. AI Implementation and Results

In this section, various classification and machine learning (ML) models were implemented to classify and predict BCI signals from a set of experiments involving participants wearing the BCI device and the FBG-equipped insole. The experiments were performed in two parts: Part I—BCI Classification and Part II—BCI Prediction. As explained earlier, the purpose of the first part was to identify the electrodes on the BCI headset that are most responsive to foot movement. In the second part, BCI signals were predicted from given plantar pressure information. The AI was implemented using Python version 3.10.4 (open-source software), and pre-processing of the data was performed in MATLAB version 2021b. For clarity, the experiments and their AI implementation are presented in two parts.

4.1. Part I—BCI Classification

In this part, EEG data were collected from the head-mounted BCI device worn by two participants. The purpose was to classify the incoming BCI data into the (a) sitting, (b) standing, and (c) walking postures. This helped identify the BCI signals directly associated with foot movement using a reduced number of channels compared with the original 16.

4.1.1. BCI Classification Experiment

The experimental setup consisted of wearing the BCI device and performing the three gait positions. For sitting, the participant sits in a chair of height 50 cm and rests his feet on the ground. For standing, the participant stands with minimal movement while maintaining a good posture. Finally, the walking gait is recorded by walking along a straight path for 60 s. The schematic for the experiment is shown in Figure 11.
Before creating machine learning models, the data were split randomly into two datasets. The first set, called the training set, contains 80% of the data; from it the classification technique extracts the information to build a model. The second set, called the testing set, was used to evaluate the model built from the training set. Four classification models were used: 1. k-nearest neighbors (KNN), 2. support vector machine (SVM), 3. logistic regression (LR), and 4. naïve Bayes (NB). The dataset was preprocessed before using the classification models. The classification result also helped identify the key electrode positions responsible for foot activity. A reduced number of BCI channels eased the burden of correlation during the second part of the experiment. The process of classification is summarized in Figure 12.
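The split-and-classify procedure above can be sketched with scikit-learn, the kind of library the study's Python implementation would use; the random arrays below are stand-ins for the real EEG features, so the resulting scores are illustrative only:

```python
# Sketch of the Part I pipeline: 80/20 split, then the four classifiers.
# Synthetic data stands in for the study's EEG feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))      # stand-in for 16-channel EEG features
y = rng.integers(0, 3, size=300)    # 0 = sitting, 1 = standing, 2 = walking

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
}
# Fit each model on the training set and score it on the held-out test set.
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
```

On the real per-electrode data files, each channel would get its own classifier of each type, as described below.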
The classification dataset was recorded from two participants for three different gait positions: sitting, standing, and walking. Brain activity was collected from each participant in ten trials of 60 s each for the three gait positions. Each trial provides data from the 16 electrodes sensing brain activity, i.e., 16 signals per trial. The total number of trials was 59 (one standing file was corrupted). Each signal has 7500 data points, since the sampling frequency was 125 Hz. Furthermore, the data were reorganized such that all the signals from all trials for one electrode were in one data file, resulting in 16 data files. Hence, the brain activity signal was used as the only feature for the classification model. The output variable of the dataset is the gait position; these outputs were encoded as 0 for sitting, 1 for standing, and 2 for walking.

4.1.2. BCI Data Pre-Processing

The dataset was preprocessed by removing the first and last 5 s for stability purposes, detrending (removing the mean of each channel), normalizing using the z-score method, and low-pass filtering at 60 Hz to remove the 60 Hz mains noise. In the BCI data, the difference between sitting and walking is not apparent in some channels, which means that brain signals do not map one-to-one to sensory perception; this makes classification and machine learning harder, since each signal is a composition of different factors. However, several models, e.g., KNN and naïve Bayes, can still achieve decent accuracy on the channels that are relatively sensitive to plantar pressure.
A sample of 14 s EEG data was investigated for the three postures for each participant as shown in Figure 13. Two main methods were used to denoise the signal. Method 1 applied normalization followed by detrending and application of a low pass filter. In method 2, detrending is applied using wavelet decomposition. These methods are a part of preprocessing data before their use in machine learning models.
As an example, the 16-channel BCI EEG data showing 14 s of walking activity before and after preprocessing by method 1 can be seen in Figure 13 and Figure 14. Normalization used the z-score so that the magnitude differences between tests would be negligible; this also removes the DC offset of the channels. Instead of using a high-pass filter at 0.1 Hz, detrending removes the mean of each channel. Detrending was used since it can improve the accuracy of machine learning, as in case study [40]. Although not perfect, this was better than introducing the artifacts caused by interpolation detrending. Finally, a low-pass filter was applied to remove the line noise at 60 Hz.
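Method 1 can be sketched as follows for a single channel; the Butterworth filter order and the 55 Hz cutoff (just below the 60 Hz line noise) are illustrative choices, not values reported by the paper:

```python
# Sketch of "method 1" preprocessing on one EEG channel:
# detrend (remove the mean), z-score normalize, then low-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 125.0  # BCI sampling rate (Hz)

def preprocess(channel, cutoff=55.0, order=4):
    x = channel - channel.mean()   # detrend: remove mean / DC offset
    x = x / x.std()                # z-score normalization
    b, a = butter(order, cutoff / (FS / 2), btype="low")
    return filtfilt(b, a, x)       # zero-phase low-pass filtering

# Toy 14 s signal: a 10 Hz component, 60 Hz line noise, and a DC offset.
t = np.arange(0, 14, 1 / FS)
raw = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 60 * t) + 5.0
clean = preprocess(raw)
```

The zero-phase `filtfilt` call avoids shifting the EEG waveform in time, which matters when BCI and FBG streams must stay aligned.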
Method 2, using wavelet decomposition, detrended the signal much more effectively, as seen in [40]. A 4-level sym4 wavelet was used instead of db4 due to its better performance according to [41], while [42] discussed the use of wavelet transforms for EEG classification.
The level 1 detailed coefficient contains the 60 Hz line noise as shown in Figure 15, while the approximation contains the low-frequency trend of the signal, as discussed in the normal pre-processing. The trend and the line noise were removed by removing both these coefficients, as shown in Figure 16. Fourier transform comparison of channel 16 before and after wavelet processing is shown in Figure 17.
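The wavelet detrending just described can be sketched with the PyWavelets package; the paper does not name its wavelet library, so treat this as an illustrative reconstruction of the same idea (zero the approximation and the level-1 detail, then reconstruct):

```python
# Sketch of "method 2": 4-level sym4 decomposition, dropping the
# approximation (low-frequency trend) and the level-1 detail
# (60 Hz line noise) before reconstructing the signal.
import numpy as np
import pywt

def wavelet_detrend(signal, wavelet="sym4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])    # approximation: trend
    coeffs[-1] = np.zeros_like(coeffs[-1])  # level-1 detail: line noise
    # waverec may return a slightly longer array; trim to input length.
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

fs = 125.0
t = np.arange(0, 14, 1 / fs)
raw = 2.0 + 0.1 * t + np.sin(2 * np.pi * 8 * t)  # trend + 8 Hz component
detrended = wavelet_detrend(raw)
```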

4.1.3. Classification Models

For the classification models, k-fold cross-validation was used on the training set, where the training set was divided into k groups. The model was trained using k−1 groups, and the remaining group was used to test its performance before trying it on the test set. The test set was only used once, after the best possible model had been obtained using k-fold cross-validation. The value of k was chosen to be 4.
After encoding the data into three classes, and splitting the data into training and testing, four classification algorithms were used in our study, which were provided by the Sklearn library. Each electrode datafile had its own classifier using different techniques.

K-Nearest Neighbors (KNN)

This method compares a test tuple with similar training tuples, which are described by n features. Each training tuple corresponds to a point in n-dimensional space; thus, all training tuples are stored in an n-dimensional pattern space. The classifier searches the pattern space for the k training tuples that are closest to the unknown tuple, and these determine the class of the unknown tuple. The k training tuples are called the k nearest neighbors [43].
In our models, the classifiers were optimized based on the distance equation, the number of neighbors considered, and the weight of each sample in the space. Four distance metrics were considered: Euclidean, Manhattan, Chebyshev, and Minkowski. All three variables were chosen to obtain the highest-accuracy model for each channel's classifier; hence, the number of neighbors k differs for each channel classifier. A few of the classifiers performed best when the closest neighbors had a higher influence on the classification decision, while others performed better when all neighbors had the same influence on the output. Finally, most of the classifiers obtained their best performance with the Chebyshev distance.
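The KNN tuning described above can be sketched as a scikit-learn grid search over the same three variables; the candidate values and the synthetic dataset are illustrative, not the paper's exact grid:

```python
# Grid search over KNN distance metric, neighbor count, and weighting,
# using 4-fold cross-validation as in the study.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

param_grid = {
    "metric": ["euclidean", "manhattan", "chebyshev", "minkowski"],
    "n_neighbors": [3, 5, 7, 9],
    "weights": ["uniform", "distance"],  # equal vs distance-weighted votes
}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=4)
search.fit(X, y)
best_params = search.best_params_  # per-channel best metric/k/weighting
```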

Support Vector Machine (SVM)

A support vector machine is used to classify linear and non-linear data. Training data can be transformed into higher dimensions to be able to find an optimal linear hyperplane in the new dimension using non-linear mapping. A hyperplane separates the data of each class from the data of the other classes. Support vectors and margins are the main methods to find this hyperplane [44].
The classifiers were optimized based on the regularization parameter (slack variable C), the kernel equation, and the kernel coefficient gamma. The regularization parameter controls the fitting by adding a penalty to the cost function to reduce overfitting. It ranged from 10^−10 to 10^4, and each classifier had a different value to maximize performance; the slack variable C determines how many samples may lie inside the margin or be wrongly classified. Moreover, four types of kernel equations were used: linear, polynomial, radial basis function (RBF), and sigmoid. The kernel equation was determined by the accuracy of the classifier. Gamma determines the decision region (the radius) of the vector and does not apply to the linear kernel; it varied from 10^−15 to 10^4 for the polynomial, RBF, and sigmoid kernels. Finally, the value of each variable was determined by the best-fitting model for each electrode classifier to obtain the highest accuracy, e.g., C = 1 and gamma = 0.01 for channel 2, and C = 100 and gamma = 0.01 for channel 6.
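A reduced version of this SVM sweep can be sketched as follows; the full 10^−10–10^4 (C) and 10^−15–10^4 (gamma) ranges are shrunk here for brevity, and the dataset is synthetic:

```python
# Grid search over SVM kernel, C, and gamma. A list of grids lets the
# linear kernel skip gamma, which does not apply to it.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

param_grid = [
    {"kernel": ["linear"], "C": [1e-2, 1.0, 1e2]},
    {"kernel": ["poly", "rbf", "sigmoid"],
     "C": [1e-2, 1.0, 1e2], "gamma": [1e-3, 1e-2, 1e-1]},
]
search = GridSearchCV(SVC(), param_grid, cv=4)
search.fit(X, y)
```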

Logistic Regression (LR)

Another powerful supervised machine learning algorithm for classification is logistic regression [45]. It models a discrete output given continuous input variables. It is usually used for a binary output but can be extended to multinomial outputs. It is simply a linear regression model passed through a logistic function that bounds the output between 0 and 1, and it does not require a linear relationship between inputs and outputs. This type of multi-class logistic regression, or multinomial regression, is known as SoftMax regression.
The slack variable C was the main hyperparameter tuned to maximize the performance of the classifiers. Another possible parameter to optimize was the solver (the algorithm used to optimize the model fitting); however, due to the nature of the dataset, only the linear solver was used.
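A minimal multinomial (SoftMax) logistic regression sketch for the three-class gait problem, with C as the tuned hyperparameter; the data and the C value are illustrative:

```python
# Multinomial logistic regression: predict_proba returns one probability
# per class, and the probabilities sum to 1 (the SoftMax output).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=150, n_features=6, n_classes=3,
                           n_informative=4, random_state=1)

clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)
probs = clf.predict_proba(X[:1])  # shape (1, 3): P(sit), P(stand), P(walk)
```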

4.1.4. Performance Measure

The performance measures for the first experiment are accuracy and balanced accuracy. These two measures compare the classifiers' ability to choose a suitable class. The accuracy is the ratio of the number of correct predictions to the total number of predictions and is given by Equation (1):
Accuracy = (TP + TN) / (TP + FP + TN + FN)
where TP, TN, FP, and FN are explained in Figure 18. The balanced accuracy is a metric used when the classes are imbalanced, since it gives each class an equal contribution to the result. Due to the low number of samples and the randomness in choosing the train, validation, and test sets, the balanced accuracy provides more reliable results than the accuracy and is calculated by Equation (2):
Balanced Accuracy = (Sensitivity + Specificity) / 2
where sensitivity is given by Equation (3):
Sensitivity = TP / (TP + FN)
and specificity is given by Equation (4):
Specificity = TN / (TN + FP)
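Equations (1)–(4) can be checked with a small worked example, treating one class as positive versus the rest; the counts below are invented for illustration:

```python
# Worked example of Equations (1)-(4) with illustrative counts.
TP, TN, FP, FN = 40, 45, 5, 10  # hypothetical confusion-matrix entries

accuracy = (TP + TN) / (TP + FP + TN + FN)           # Equation (1): 0.85
sensitivity = TP / (TP + FN)                         # Equation (3): 0.80
specificity = TN / (TN + FP)                         # Equation (4): 0.90
balanced_accuracy = (sensitivity + specificity) / 2  # Equation (2): 0.85
```

Here plain accuracy and balanced accuracy happen to coincide; with imbalanced classes (e.g., far fewer walking samples) the two diverge, which is why the study prefers balanced accuracy.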
The classification model was run on 59 files for raw sampled data and preprocessed BCI data (one of the 60 files was corrupted), and Table 2 provides the accuracy results for each channel and classification model utilized. In addition, BCI channels directly correlated to plantar foot pressure were acquired as a result of this classification and included in the machine learning model.
The channels with the best accuracy for the results shown in Table 2 are 2, 5, 6, and 9 by KNN (k-nearest neighbors) and 6, 9, 11, and 12 by naïve Bayes, as shown in Figure 19. The KNN model performed better on the raw data than on the preprocessed data because it depends on the distance between a point and its neighbors: the distances between points in the raw data are untouched, while distances in the preprocessed data are changed by detrending. The gait posture influences channels 2, 5, 6, and 9, since they have better accuracy than the rest of the channels, and they must be considered in the second part of the experiment. Moreover, detrending the data removes the average value of each signal, so the data variance plays the prominent role in the NB classifier on the preprocessed data.
On the other hand, the data affected by the gait posture have smaller variance and thus higher NB classifier accuracy, as in channels 6, 9, 11, and 12. Therefore, both the raw and the preprocessed data results were carried over to Part II, since they identify the channels sensitive to gait posture from different perspectives and thus cover all possibilities.
For verification purposes, the findings of the classification model were compared with other literature. For example, they agree with the brain activity recorded by magnetic resonance imaging (MRI) of supine participants with a force applied to the sole [47].

4.2. Part II—BCI Prediction

In this part, plantar pressure data were collected from the FBG-mounted insole, with FBGs located at three high-pressure points in the plantar region. These data were recorded simultaneously with the BCI data from four participants performing sitting, standing, and walking activities, to collect enough data samples to run a machine learning model and predict the brain EEG signal in a specific posture.

4.2.1. BCI Prediction Experiment

Ten trials were conducted per participant, with 60 s assigned for BCI data collection (as in the first part) in order to stabilize the input data, and 40 s (14 s for walking, limited by the walking path due to the optical fiber length constraint) assigned for FBG and BCI data collection in the second part of the experiment. It was ensured that the time stamps and signal initiation for each activity, i.e., sitting, standing, and walking, matched between the BCI and FBG. The sampling rate for both data streams was 125 Hz, whereas the built-in capability of the interrogator was 3 kHz and that of the BCI was 250 Hz. The schematic for the experiment is shown in Figure 20, in addition to the BCI data collection discussed in Section 4.1.1. The data from both the FBG sensors and the BCI device were investigated for any outliers and/or artifacts that could influence the results of the classification and prediction studies. The following details the instructions given to participants while recording data in the sitting, standing, and walking postures.
The participant sat in a chair of height 50 cm while wearing the BCI device correctly and resting his feet, shoes off, on a board to which the insole was attached. This setup was used to record the sitting data for 60 s for each of the four participants, repeated 10 to 12 times, for a total of 43 sitting data files. The standing data were recorded for 60 s, repeated 10 to 15 times, for a total of 51 files. The walking data were recorded for 15 s, repeated 20 times for each of three participants, for a total of 60 files, where the path allowed an average of 14 steps. Figure 21 illustrates the (a) walking, (b) standing, and (c) sitting postures of the participants while wearing the equipment.
The data obtained from the BCI and FBG were cleaned and preprocessed for the final correlation stage. A machine learning model fitted the three FBG signals to selected BCI channels, chosen from the literature and the results of the previous channel classification experiments. After reducing the number of BCI channels and preprocessing both the BCI and FBG data, a neural network model was used to predict the BCI channel waveform from the three FBG signals. Three models were considered in this project: the recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU). The signal flow diagram for the machine learning process is summarized in Figure 22.
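The FBG-to-EEG mapping can be sketched as a small sequence model. This uses PyTorch with illustrative layer sizes, since the paper does not specify its framework or architecture details; swapping `nn.LSTM` for `nn.RNN` or `nn.GRU` gives the other two variants compared in the study:

```python
# Sequence-to-sequence sketch: three FBG pressure channels in,
# one predicted EEG channel out, trained with MSE loss.
import torch
import torch.nn as nn

class FBG2EEG(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Replace nn.LSTM with nn.RNN or nn.GRU for the other variants.
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one EEG channel amplitude

    def forward(self, x):                 # x: (batch, time, 3 FBG signals)
        out, _ = self.rnn(x)
        return self.head(out)             # (batch, time, 1)

model = FBG2EEG()
fbg = torch.randn(2, 1750, 3)             # 14 s at 125 Hz = 1750 samples
pred = model(fbg)
loss = nn.MSELoss()(pred, torch.randn(2, 1750, 1))  # paper's MSE metric
```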

4.2.2. BCI Data Pre-Processing

For the 40 s sitting and standing data, the first and last 5 s were removed for stability purposes. The remaining 30 s file was then divided into two 15 s files to match the length of the 15 s walking data. However, after examining the walking data, another second from the beginning of each file had to be removed to keep a clean signal, so the new dataset was made up of 14 s files. It consisted of 84 sitting data files, 102 standing data files, and 60 walking data files. Wavelet decomposition was used to remove noise and signal trends, represented by the wavelet approximation and the first detail; a "sym2" wavelet was used for the decomposition.

4.2.3. FBG Data Pre-Processing

For the FBG data, minimal preprocessing was required since the optical sensors do not pick up noise from the surroundings, but outliers had to be corrected, as seen in Figure 23. Some outliers were caused by a sudden zero reading in a channel, while others were caused by the data from one channel being recorded in a different channel. The outliers were detected using the generalized extreme Studentized deviate (GESD) test and filled using shape-preserving piecewise cubic interpolation (pchip). Files containing a large number of zeros had to be processed with a median-based detection method instead. After cleaning, the files were normalized using the maximum and minimum points of the three waveforms. Normalization is commonly applied during data preparation for machine learning; its purpose is to bring the values of numeric columns onto a common scale without distorting the differences between their ranges. The data were normalized to [0, 1] using min-max scaling, as also reported in [48].
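A simplified stand-in for this outlier-correction step might look as follows. Note the substitutions: a robust (median/MAD) z-score replaces the GESD test, and linear interpolation replaces pchip filling, so this is an illustration of the idea rather than the paper's exact pipeline; the threshold value is also an assumption.

```python
import numpy as np

def fill_outliers(signal, z_thresh=3.5):
    # Flag points far from the median (robust z-score via the median absolute
    # deviation) and fill them by interpolating between the valid neighbours.
    x = np.array(signal, dtype=float)        # copy, so the input is untouched
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    mad = mad if mad > 0 else 1e-12          # guard against a zero MAD
    bad = np.abs(x - med) / (1.4826 * mad) > z_thresh
    idx = np.arange(len(x))
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return x
```

For example, a spurious spike of 50.0 in an otherwise ~1.0 waveform is replaced by the average of its valid neighbours.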
To match the length of the BCI dataset, the first second from each file was also removed to form 14 s files matching the BCI data files. Finally, both FBG and BCI Raw, Wavelet, and Processed data were combined in one file with 1750 samples and a sampling rate of 125. After that, a total of 250 files for each data set were ready to be inserted into the machine learning model.
The participants' focus and immobility are important while collecting the pressure-profile data: since the FBG sensors are highly sensitive and detect even slight movement, a recording is unusable if the person moves or does not maintain the correct posture. Even so, there is no way to guarantee identical results every time; as shown in Figure 23, pressure data recorded consecutively for the same posture of the same person still show some variance. Nevertheless, the data behave as expected; for standing, for example, the maximum pressure is in the middle of the foot, then on the heel, and lastly on the toes, which is the case for almost all the recorded data, although the exact values differ. These values represent the shift from the FBG's original wavelength, given in Equation (5) as:
Δλ = λ_orig − λ_meas
where Δλ is the wavelength shift, λ_orig is the original FBG wavelength, and λ_meas is the measured wavelength of the FBG. This value is affected by many factors, such as the participant's weight and posture, the FBG positions, and the surface material and specifications. To minimize the error of the machine learning model, all the collected FBG data are normalized using Equation (6):
Δλ_norm = (λ_meas − min of 3 waveforms) / (max of 3 waveforms − min of 3 waveforms)
where Δλ_norm is the normalized wavelength shift. Figures 24–26 show the three normalized waveforms for sitting, standing, and walking, respectively, for two trials of a single participant. The results show good consistency after normalization. In addition, many outliers were corrected while extracting the FBG data.
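Equation (6) amounts to joint min-max scaling, with a single minimum and maximum taken over all three FBG waveforms together so that the relative pressure distribution between sensor positions is preserved. A minimal sketch:

```python
import numpy as np

def normalize_fbg(waveforms):
    # Eq. (6): one shared min/max over all three FBG waveforms, mapping the
    # stacked data to [0, 1] while preserving inter-sensor differences.
    w = np.vstack(waveforms)
    lo, hi = w.min(), w.max()
    return (w - lo) / (hi - lo)
```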

4.2.4. Performance Measure

In the second part of the experiment, deep learning algorithms were used to create models for the brain activity electrodes. To optimize each model, a loss function was required to quantify the prediction error; the mean square error (MSE), whose minimization maximizes the prediction accuracy of the models, was used. The MSE is given by Equation (7):
MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²
where n is the number of samples, y_i is the target value, and ŷ_i is the predicted value.
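Equation (7) translates directly into code; the helper below is a straightforward implementation of the loss used to compare the models.

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error (Eq. 7) between target and predicted samples.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))
```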
The deep learning algorithms built were the RNN, LSTM, and GRU; detailed schematics of the three are shown in Figure 27. All were optimized with the Adam optimizer using the MSE loss function. Each model was trained on data from three different processing methods: raw data, processed data, and wavelet data.
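A hypothetical PyTorch sketch of this setup is shown below: a recurrent model mapping the 3 FBG wavelength-shift channels to one EEG channel, sample by sample, trained with Adam on the MSE loss. The hidden size, learning rate, batch layout, and window length are assumptions for illustration; the paper does not specify its exact configuration here.

```python
import torch
import torch.nn as nn

class FBGtoEEG(nn.Module):
    # Maps a (batch, time, 3) FBG sequence to a (batch, time, 1) EEG estimate.
    def __init__(self, hidden_size=64, cell="lstm"):
        super().__init__()
        rnn_cls = {"rnn": nn.RNN, "lstm": nn.LSTM, "gru": nn.GRU}[cell]
        self.rnn = rnn_cls(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.rnn(x)       # per-timestep hidden states
        return self.head(out)      # linear readout to one EEG channel

model = FBGtoEEG(cell="lstm")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data
# (a 1 s window of 125 samples at the 125 Hz sampling rate).
fbg = torch.randn(8, 125, 3)
eeg = torch.randn(8, 125, 1)
optimizer.zero_grad()
loss = loss_fn(model(fbg), eeg)
loss.backward()
optimizer.step()
```

Swapping `cell="lstm"` for `"rnn"` or `"gru"` reproduces the three-way comparison described above without changing the rest of the pipeline.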
Raw data are not recommended as input to most machine learning models, which was confirmed here by the extremely high training and validation MSE they produced. Figure 28 plots the training and validation MSE of the LSTM, RNN, and GRU models, respectively, for the processed data of channel 6, whereas Figure 29 plots the training and validation MSE of the LSTM, RNN, and GRU models for the wavelet-decomposed data of channel 9.
It was observed that the LSTM has the lowest error and requires the fewest epochs to reach a steady state. Figure 30 compares the original and predicted BCI signals at channel 2, as a sample, based on unseen FBG data. Figure 31 provides a close-up at two instances to clearly show the resemblance between the original and the predicted signal.

5. Conclusions

A deep learning model that can predict brain activity signals for 6 of the 16 electrodes available from the BCI device using plantar pressure was built and tested. The deep learning algorithms built were the RNN, LSTM, and GRU, all optimized using the Adam optimizer with the MSE loss function. The raw data showed the worst model performance, as expected from the literature on deep learning performance. The processed data (normalized, detrended, filtered) performed well, with a minimum MSE of 0.0877 for channel 12. The wavelet-decomposed data had the best overall performance, with MSE values of 0.0757, 0.0919, 0.1093, 0.065, 0.0784, and 0.093 for electrodes 2, 5, 6, 9, 11, and 12, respectively; they yielded a lower MSE for all electrodes except electrode 12.
The six electrodes were chosen based on classification models in which BCI data were collected for three postures (sitting, standing, and walking). In the classification models, the BCI data were organized by channel; hence there is a classifier for every channel for each classification algorithm (K-NN, SVM, logistic regression, and naïve Bayes). The classifiers were tested on raw data and on processed data (normalized, detrended, and filtered). The best performing posture classifiers are those for channels 2, 5, 6, 9, 11, and 12, with accuracies of 92%, 81%, 85%, 93%, 92%, and 87%, respectively. The plantar pressure itself was obtained using three FBG sensors placed at the maximum pressure points of the foot.
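The per-channel classifier comparison described above can be sketched in a few lines with scikit-learn. Everything here is a stand-in: the feature matrix is synthetic (randomly generated clusters for the three posture classes), and the hyperparameters are defaults, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for one channel's feature vectors: 3 posture classes
# (0 = sitting, 1 = standing, 2 = walking), 60 windows each, 20 features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(60, 20)) for c in range(3)])
y = np.repeat([0, 1, 2], 60)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# The four classifier families compared in the paper.
classifiers = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
```

Repeating this per channel and ranking the test accuracies is what singles out the best-responding electrodes.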
While the final aim of the research was to predict brain signals from given plantar pressure information, the classification accuracy played an important role: accuracies above 80%, compared with previous works cited in a comprehensive review [49], provided confidence in the results. For future work, a thorough investigation is suggested with combinations of different numbers of electrodes and FBG sensors to identify the trends in classification and prediction accuracy.

6. Recommendations

1. The FBG optical cables can be replaced with free-space optical communications to obtain the data wirelessly and allow longer distances and durations for trials.
2. Deep learning algorithms can be written from scratch to achieve lower MSE instead of using PyTorch models.
3. Different types of foot insoles can be used to obtain a variety of plantar pressure data.
4. A temperature element can be added to the insole to examine the effect of plantar temperature on brain activity signals.
5. For production, the current BCI device can be substituted by a BCI device with fewer channels, which will cut costs and make this process cheaper and more accessible.
6. Since the FBG sensors are fragile, the insole should have a grooved housing, made of a hard material, that protects the sensors while preserving pressure sensitivity. The FBG leads should also be shortened to extend only to the ankle and connected to a portable light source and monitor; this would significantly enhance mobility and allow for more accurate testing.
7. If the machine learning model had more data to learn from, a more robust trained model could be developed for medical and research use.
8. Further research into miniature embeddable sensors such as FBGs will allow exploring their potential for providing a sensorial base to mimic the human sense of touch.

Author Contributions

A.M.B. was the project PI and was responsible for project planning, resource management, and monitoring of the execution of critical elements of the project. K.K.Q. acted as the mentor and knowledge support for the fiber optic implementation of the project. H.A. and M.A. set up the experiments, collected the data, and processed it for machine learning and visualization. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Research, Oversight and Coordination, KFUPM, Project Code SR191-027.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. Ethical review and approval were waived for this study because only able-bodied individuals participated, with consent.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to prior consent from the institution/funding agency.

Acknowledgments

The authors would like to acknowledge the lab space provided for the experiments by the Optical Communications and Sensors Lab of the Electrical Engineering Department at KFUPM.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; and in the writing of the manuscript, or in the decision to publish the results.

References

1. Suresh, R.; Bhalla, S.; Hao, J.; Singh, C. Development of a high resolution plantar pressure monitoring pad based on fiber Bragg grating (FBG) sensors. Technol. Health Care 2015, 23, 785–794.
2. Alfuth, M.; Rosenbaum, D. Effects of changes in plantar sensory feedback on human gait characteristics: A systematic review. Footwear Sci. 2012, 4, 1–22.
3. Abdul Razak, A.H.; Zayegh, A.; Begg, R.K.; Wahab, Y. Foot plantar pressure measurement system: A review. Sensors 2012, 12, 9884–9912.
4. Ramirez-Bautista, J.A.; Hernández-Zavala, A.; Chaparro-Cárdenas, S.L.; Huerta-Ruelas, J.A. Review on plantar data analysis for disease diagnosis. Biocybern. Biomed. Eng. 2018, 38, 342–361.
5. Klöpfer-Krämer, I.; Brand, A.; Wackerle, H.; Müßig, J.; Kröger, I.; Augat, P. Gait analysis—Available platforms for outcome assessment. Injury 2020, 51, S90–S96.
6. Van De Meent, H.; Hopman, M.T.; Frölke, J.P. Walking ability and quality of life in subjects with transfemoral amputation: A comparison of osseointegration with socket prostheses. Arch. Phys. Med. Rehabil. 2013, 94, 2174–2178.
7. Ahmad, N.; Thomas, G.N.; Gill, P.; Torella, F. The prevalence of major lower limb amputation in the diabetic and non-diabetic population of England 2003–2013. Diabetes Vasc. Dis. Res. 2016, 13, 348–353.
8. Alzahrani, H.A. Diabetes-related lower extremities amputations in Saudi Arabia: The magnitude of the problem. Ann. Vasc. Dis. 2012, 5, 151–156.
9. Eapen, B.C.; Murphy, D.P.; Cifu, D.X. Neuroprosthetics in amputee and brain injury rehabilitation. Exp. Neurol. 2017, 287, 479–485.
10. Butt, A.M.; Qureshi, K.K. Smart lower limb prostheses with a fiber optic sensing sole: A multicomponent design approach. Sens. Mater. 2019, 31, 2965–2979.
11. Bensmaia, S.J.; Tyler, D.J.; Micera, S. Restoration of sensory information via bionic hands. Nat. Biomed. Eng. 2020, 4, 1–13.
12. Domínguez-Morales, M.J.; Luna-Perejón, F.; Miró-Amarante, L.; Hernández-Velázquez, M.; Sevillano-Ramos, J.L. Smart footwear insole for recognition of foot pronation and supination using neural networks. Appl. Sci. 2019, 9, 3970.
13. Lakho, R.A.; Yi-Fan, Z.; Jin-Hua, J.; Cheng-Yu, H.; Ahmed Abro, Z. A smart insole for monitoring plantar pressure based on the fiber Bragg grating sensing technique. Text. Res. J. 2019, 89, 3433–3446.
14. Taffoni, F.; Formica, D.; Saccomandi, P.; Di Pino, G.; Schena, E. Optical fiber-based MR-compatible sensors for medical applications: An overview. Sensors 2013, 13, 14105–14120.
15. Roriz, P.; Carvalho, L.; Frazão, O.; Santos, J.L.; Simões, J.A. From conventional sensors to fibre optic sensors for strain and force measurements in biomechanics applications: A review. J. Biomech. 2014, 47, 1251–1261.
16. Massaroni, C.; Saccomandi, P.; Schena, E. Medical Smart Textiles Based on Fiber Optic Technology: An Overview. J. Funct. Biomater. 2015, 6, 204–221.
17. Qureshi, K.K. Detection of Plantar Pressure Using an Optical Technique. In Proceedings of the 2021 7th International Conference on Engineering, Applied Sciences and Technology, Bangkok, Thailand, 1–3 April 2021; pp. 77–80.
18. Leal-Junior, A.; Theodosiou, A.; Díaz, C.; Marques, C.; Pontes, M.J.; Kalli, K.; Frizera-Neto, A. Fiber Bragg gratings in CYTOP fibers embedded in a 3D-printed flexible support for assessment of human-robot interaction forces. Materials 2018, 11, 2305.
19. Chereshnev, R.; Kertész-Farkas, A. HuGaDB: Human gait database for activity recognition from wearable inertial sensor networks. In Proceedings of the International Conference on Analysis of Images, Social Networks and Texts, Moscow, Russia, 15–16 October 2018; pp. 131–141.
20. Nazmi, N.; Abdul Rahman, M.A.; Yamamoto, S.I.; Ahmad, S.A. Walking gait event detection based on electromyography signals using artificial neural network. Biomed. Signal Process. Control 2019, 47, 334–343.
21. Choi, A.; Jung, H.; Mun, J.H. Single inertial sensor-based neural networks to estimate COM-COP inclination angle during walking. Sensors 2019, 19, 2974.
22. Mudgal, S.K.; Sharma, S.K.; Chaturvedi, J.; Sharma, A. Brain computer interface advancement in neurosciences: Applications and issues. Interdiscip. Neurosurg. 2020, 20, 100694.
23. Ullah, H.; Uzair, M.; Mahmood, A.; Ullah, M.; Khan, S.D.; Cheikh, F.A. Internal Emotion Classification Using EEG Signal with Sparse Discriminative Ensemble. IEEE Access 2019, 7, 40144–40153.
24. Zhang, D.; Yao, L.; Chen, K.; Monaghan, J. A Convolutional Recurrent Attention Model for Subject-Independent EEG Signal Analysis. IEEE Signal Process. Lett. 2019, 26, 715–719.
25. Hua, X.; Ono, Y.; Peng, L.; Cheng, Y.; Wang, H. Target Detection within Nonhomogeneous Clutter via Total Bregman Divergence-Based Matrix Information Geometry Detectors. IEEE Trans. Signal Process. 2021, 69, 4326–4340.
26. Gao, Y.; Li, H.; Himed, B. Adaptive subspace tests for multichannel signal detection in auto-regressive disturbance. IEEE Trans. Signal Process. 2018, 66, 5577–5587.
27. Dahiya, R.S.; Valle, M. Human Tactile Sensing; Springer: Cham, Switzerland, 2013; ISBN 978-94-007-0578-4.
28. Maurer, C.; Mergner, T.; Peterka, R.J. Multisensory control of human upright stance. Exp. Brain Res. 2006, 171, 231–250.
29. Labriffe, M.; Annweiler, C.; Amirova, L.E.; Gauquelin-Koch, G.; Ter Minassian, A.; Leiber, L.M.; Beauchet, O.; Custaud, M.A.; Dinomais, M. Brain activity during mental imagery of gait versus gait-like plantar stimulation: A novel combined functional MRI paradigm to better understand cerebral gait control. Front. Hum. Neurosci. 2017, 11, 106.
30. Mekid, S.; Butt, A.M.; Qureshi, K. Integrity assessment under various conditions of embedded fiber optics based multi-sensing materials. Opt. Fiber Technol. 2017, 36, 334–343.
31. Mishra, V.; Singh, N.; Tiwari, U.; Kapur, P. Fiber grating sensors in medicine: Current and emerging applications. Sens. Actuators A Phys. 2011, 167, 279–290.
32. Wang, H.; Song, Q.; Zhang, L.; Liu, Y. Design on the control system of a gait rehabilitation training robot based on Brain-Computer Interface and virtual reality technology. Int. J. Adv. Robot. Syst. 2012, 9, 145.
33. Milosevic, M.; Marquez-Chin, C.; Masani, K.; Hirata, M.; Nomura, T.; Popovic, M.R.; Nakazawa, K. Why brain-controlled neuroprosthetics matter: Mechanisms underlying electrical stimulation of muscles and nerves in rehabilitation. Biomed. Eng. Online 2020, 19, 1–30.
34. Batula, A.M.; Mark, J.A.; Kim, Y.E.; Ayaz, H. Comparison of Brain Activation during Motor Imagery and Motor Movement Using fNIRS. Comput. Intell. Neurosci. 2017, 2017, 5491296.
35. Zhao, M.; Marino, M.; Samogin, J.; Swinnen, S.P.; Mantini, D. Hand, foot and lip representations in primary sensorimotor cortex: A high-density electroencephalography study. Sci. Rep. 2019, 9, 1–12.
36. Jasiewicz, B.; Klimiec, E.; Młotek, M.; Guzdek, P.; Duda, S.; Adamczyk, J.; Potaczek, T.; Piekarski, J.; Kołaszczyński, G. Quantitative analysis of foot plantar pressure during walking. Med. Sci. Monit. 2019, 25, 4916–4922.
37. Ren, B.; Liu, J. Design of a plantar pressure insole measuring system based on modular photoelectric pressure sensor unit. Sensors 2021, 21, 3780.
38. Tavares, C.; Domingues, M.F.; Frizera-Neto, A.; Leite, T.; Leitão, C.; Alberto, N.; Marques, C.; Radwan, A.; Rocon, E.; André, P.; et al. Gait shear and plantar pressure monitoring: A non-invasive OFS based solution for e-health architectures. Sensors 2018, 18, 1334.
39. Hessert, M.J.; Vyas, M.; Leach, J.; Hu, K.; Lipsitz, L.A.; Novak, V. Foot pressure distribution during walking in young and old adults. BMC Geriatr. 2005, 5, 1–8.
40. Mishra, A. Wavelets—A Hidden Gem for Artificial Intelligence in Seismic Interpretations. 2020. Available online: https://www.mathworks.com/content/dam/mathworks/mathworks-dot-com/company/events/conferences/matlab-energy-conference/wavelets-a-hidden-gem-for-artificial-intelligence-in-seismic-interpretation.pdf (accessed on 23 January 2022).
41. Lahmiri, S. Comparative study of ECG signal denoising by wavelet thresholding in empirical and variational mode decomposition domains. Healthc. Technol. Lett. 2014, 1, 104–109.
42. Kumar, N.; Alam, K.; Siddiqi, A.H. Wavelet transform for classification of EEG signal using SVM and ANN. Biomed. Pharmacol. J. 2017, 10, 2061–2069.
43. Er, Y. The Classification of White Wine and Red Wine According to Their Physicochemical Qualities. Int. J. Intell. Syst. Appl. Eng. 2016, 4, 23–26.
44. Mishra, A.K.; Pani, S.K.; Ratha, B.K. Decision Tree Analysis on J48 and Random Forest Algorithm for Data Mining Using Breast Cancer Microarray Dataset. Available online: http://www.ijates.com/images/short_pdf/1449290433_239D.pdf (accessed on 23 January 2022).
45. Guerrero, M.C.; Parada, J.S.; Espitia, H.E. EEG signal analysis using classification techniques: Logistic regression, artificial neural networks, support vector machines, and convolutional neural networks. Heliyon 2021, 7, e07258.
46. Agarwal, S. Data mining: Data mining concepts and techniques. In Proceedings of the 2013 International Conference on Machine Intelligence and Research Advancement, Katra, India, 21–23 December 2013; ISBN 9780769550138.
47. Verrel, J.; Almagor, E.; Schumann, F.; Lindenberger, U.; Kühn, S. Changes in neural resting state activity in primary and higher-order motor areas induced by a short sensorimotor intervention based on the Feldenkrais method. Front. Hum. Neurosci. 2015, 9, 232.
48. Manie, Y.C.; Li, J.W.; Peng, P.C.; Shiu, R.K.; Chen, Y.Y.; Hsu, Y.T. Using a machine learning algorithm integrated with data de-noising techniques to optimize the multipoint sensor network. Sensors 2020, 20, 1070.
49. Khan, H.; Naseer, N.; Yazidi, A.; Eide, P.K.; Hassan, H.W.; Mirtaheri, P. Analysis of Human Gait Using Hybrid EEG-fNIRS-Based BCI System: A Review. Front. Hum. Neurosci. 2021, 14, 605.
Figure 1. A typical human gait cycle.
Figure 2. Working principle of FBG sensors and associated inspection scheme.
Figure 3. (a) 10–20 system for EEG monitoring on the scalp; (b) 16-electrode positioning by the Open BCI headset.
Figure 4. Open BCI Ultracortex Mark IV Headset with on-board biosensing circuit.
Figure 5. Comparison of brain activity monitored during hand, foot, and lip movement [35].
Figure 6. Foot plantar area divided into different regions.
Figure 7. (a) Sensor location identified with MATLAB code; (b) foot pressure identified with markings of the ink pad.
Figure 8. (a) Schematics of the experimental setup; (b) experimental setup at the lab.
Figure 9. (a) Three FBGs installed on the insole; (b) sandal prepared for multi-user use; (c) a participant walking while wearing the BCI headset and the FBG installed insole.
Figure 10. (a) Optical interrogator GUI module; (b) open BCI system control panel GUI 16 channel collection.
Figure 11. Schematic for part I of the experiment (BCI Classification).
Figure 12. Signal flow from data collection to classification.
Figure 13. Raw EEG data of the 16 channels over 14 s during walking of a single participant.
Figure 14. (a) Normalized channels using the z-score method; (b) detrended signal by removing the mean; and (c) low-passed signals at 60 Hz.
Figure 15. 4-level wavelet decomposition using sym4 wavelet.
Figure 16. Wavelet decomposition with both approximation and level 1 detail removed.
Figure 17. Fourier transform comparison of channel 16 (a) before; (b) after pre-processing; and (c) after wavelet processing.
Figure 18. Confusion Matrix.
Figure 19. BCI Channels (EEGs) and Classification Output.
Figure 20. Schematic for part II of the experiment with FBG data collection (BCI Prediction).
Figure 21. (a) Walking (b) standing (c) sitting postures of the participants while wearing equipment.
Figure 22. Signal flow diagram for the FBG-BCI machine learning model.
Figure 23. Standing posture wavelength-shifts for 3 trials of a single participant (before normalization).
Figure 24. Sitting posture normalized shift for 2 trials of a single participant.
Figure 25. Standing posture normalized shift for 2 trials of a single participant.
Figure 26. Walking posture normalized shift for 2 trials of a single participant.
Figure 27. Schematics for the RNN, LSTM, and GRU machine learning models.
Figure 28. Channel 6: Training and validation error using (a) LSTM, (b) RNN and (c) GRU models.
Figure 29. Channel 9: Training and validation error using (a) LSTM, (b) RNN and (c) GRU models.
Figure 30. Channel 2 prediction based on the three FBG inputs for the LSTM model.
Figure 31. (a) A close-up of Figure 30 from 1.90 to 2.25 s; (b) close-up of Figure 30 from 3.4 to 3.8 s.
Table 1. Information of participants for the study.

Participant   Age   Weight (kg)   Height (m)   Foot Size (EU)
1             23    102           1.73         42.5
2             24    67            1.77         42
3             23    72            1.80         43
4             23    70            1.70         41
Table 2. Accuracy results of channel classification.
SAMPLEDPROCESSED
K-NNSVMLogistic RegressionNaïve BayesK-NNSVMLogistic RegressionNaïve Bayes
CHTRVALTBTTRVALTBTTRVALTBTTRVALTBTTRVALTBTTRVALTBTTRVALTBTTRVALTBT
110.810.750.80.870.830.750.810.590.330.370.720.730.580.510.510.330.3610.410.50.5610.380.250.290.810.560.670.69
20.740.740.920.920.830.780.670.6110.550.250.240.740.720.660.6110.450.170.2210.430.250.2910.450.250.250.680.550.580.6
310.680.50.540.780.780.420.410.490.420.440.70.660.580.6210.510.50.5810.440.420.510.380.250.330.660.510.580.6
410.590.750.780.980.730.750.6910.430.330.320.550.530.50.5110.490.670.6710.320.670.7210.250.750.780.680.620.750.67
50.810.770.830.8110.280.50.5410.490.330.320.680.680.670.6710.380.580.5310.360.670.6910.320.750.760.620.470.750.75
610.530.830.8510.420.420.510.450.330.360.510.510.670.610.510.330.4210.470.50.5810.40.420.470.740.550.830.85
710.620.580.5910.360.330.3710.490.330.330.640.60.670.6410.450.330.4210.440.250.2910.440.330.370.830.550.330.31
810.510.670.6710.420.420.4710.380.250.280.490.490.330.3210.490.250.3310.340.420.4710.320.420.50.830.640.50.48
910.60.920.9310.330.250.310.520.580.560.550.530.580.5310.490.250.2410.320.250.2810.290.160.190.740.530.920.92
1010.540.580.5910.250.330.4210.620.330.390.550.560.580.6110.420.250.3310.30.250.3310.270.330.40.790.590.420.48
1110.540.50.4310.330.50.5710.320.330.360.550.540.670.6810.470.420.510.450.580.6510.290.420.460.70.530.920.92
120.740.70.50.5110.470.50.5810.290.330.360.570.60.420.410.560.50.5810.510.50.5810.420.50.580.850.70.830.87
1310.470.420.4710.330.250.3310.40.330.390.510.480.330.3310.450.50.5710.360.420.4710.340.330.390.550.430.420.43
1410.550.670.6710.280.330.410.30.080.070.510.380.330.3310.430.420.4810.230.330.390.960.280.580.670.830.450.330.36
1510.690.670.6310.420.330.4210.420.330.330.570.550.420.410.430.580.6610.320.50.5810.280.420.50.80.490.50.51
1610.590.50.5110.440.420.4710.480.330.330.320.730.420.3310.380.420.4710.410.420.4710.340.250.30.770.510.50.54
Avg0.960.6210.6620.6690.970.470.450.4910.450.330.340.570.580.530.5110.460.410.4610.380.430.4910.340.40.450.740.540.610.62
TR: Training accuracy; VAL: Validation Accuracy; T: Test Accuracy; BT: TEST Balanced Accuracy.

Butt, A.M.; Alsaffar, H.; Alshareef, M.; Qureshi, K.K. AI Prediction of Brain Signals for Human Gait Using BCI Device and FBG Based Sensorial Platform for Plantar Pressure Measurements. Sensors 2022, 22, 3085. https://doi.org/10.3390/s22083085