Article

IoT and Cloud Computing in Health-Care: A New Wearable Device and Cloud-Based Deep Learning Algorithm for Monitoring of Diabetes

1 Control and Systems Engineering Department, University of Technology-Iraq, Baghdad 00964, Iraq
2 College of Technical Engineering, The Islamic University, Najaf 54001, Iraq
3 School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
4 College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq
5 Department of Computer Science, University of Jaén, 23071 Jaén, Spain
6 Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
* Authors to whom correspondence should be addressed.
Electronics 2021, 10(21), 2719; https://doi.org/10.3390/electronics10212719
Submission received: 1 October 2021 / Revised: 4 November 2021 / Accepted: 5 November 2021 / Published: 8 November 2021
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)

Abstract

Diabetes is a chronic disease that can negatively affect human health when blood glucose levels are elevated above a certain range, a condition called hyperglycemia. Current devices for continuous glucose monitoring (CGM) supervise the blood glucose level of type-1 diabetes patients and alert the user once a certain critical level is surpassed. This forces the body of the patient to operate at critical levels until medicine is taken to reduce the glucose level, increasing the risk of considerable health damage if the intake is delayed. To overcome the latter, a new approach based on cutting-edge software and hardware technologies is proposed in this paper. Specifically, an artificial intelligence deep learning (DL) model is proposed to predict glucose levels over a 30 min horizon. Moreover, Cloud computing and IoT technologies are used to implement the prediction model and combine it with an existing wearable CGM device, providing patients with predictions of their future glucose levels. Among the many DL methods in the state-of-the-art (SoTA), a cascaded RNN-RBM model based on recurrent neural networks (RNNs) and restricted Boltzmann machines (RBMs) has been adopted due to its superior prediction accuracy. The conducted experiments show that the proposed Cloud&DL-based wearable approach achieves an average RMSE of 15.589, thus outperforming similar existing blood glucose prediction methods in the SoTA.

1. Introduction

Nowadays, diabetes is considered one of the most common diseases and is rapidly increasing around the world. It represents an important global health problem, and the World Health Organization (WHO) encourages scientific work towards addressing it. Since 1965, many contributions have been published promoting specific guidelines for the diagnosis, monitoring, and treatment of diabetes [1]. Under diabetic conditions, the human body is unable to produce enough insulin to manage blood sugar levels, or the insulin produced is not utilized effectively. Diabetes can also cause various complications such as kidney disease, heart disease, nerve damage, blindness, and damage to blood vessels [2].
In the last few years, one of the most relevant systems used to help with type-1 diabetes (T1D) is the continuous glucose monitoring (CGM) system, which allows patients to continuously monitor their blood glucose levels and react accordingly [3]. Predicting future blood glucose levels can serve as an early-warning tool to help patients manage their insulin dosing and protect them from potentially severe damage. However, the complex behavior of blood glucose levels, which depends on many factors such as sleeping patterns, recent insulin injections, and carbohydrate intake, makes short-term prediction a complex and challenging task [4]. Thus, it is necessary to adopt technologies that provide improved alternatives when dealing with these health-care problems.
Recently, artificial intelligence and new learning techniques have become promising approaches for improving patient health care. The prediction of blood glucose levels can be treated as a time series problem: a sequence of present and past glucose levels is used to predict the values in the near future. Machine learning (ML) methods are usually preferred because they provide accurate and fast results at a low computational cost. Deep learning (DL) methods, a subfield of ML, further increase predictive power due to their inherent ability to combine data from various sources and to manage large amounts of data [5].
In the last few years, the IoT and Cloud computing paradigms have contributed new wearable technologies to the field of health care. These cutting-edge developments directly increase the comfort of life of patients dealing with the disease. In particular, the instant glucose level, the glucose trend, and direction information are three relevant parameters that can be obtained using a small sensor. Such a device has low power consumption and is worn on the arm of the patient, replacing the traditional invasive blood test. More importantly, these types of IoT devices lack processing power, so the obtained data can be processed and analyzed on a more powerful computing device placed in the Cloud [6].
In this paper, a new wearable CGM system is proposed to predict the blood glucose level from the glucose level history using a DL method run in the Cloud. Specifically, recurrent neural networks (RNNs) are proven to capture temporal auto-correlation features in data, while restricted Boltzmann machines (RBMs) can delineate complex distributions in the data. Therefore, our Cloud&DL-based wearable approach makes use of a DL model designed from both learning strategies, and a cascaded RNN-RBM method is accordingly considered. The new health care system is aimed at time series prediction of the blood glucose level over a 30 min horizon with greater precision than that of other methods from the state-of-the-art (SoTA).
The contributions can be summarized as follows:
  • A DL-based blood glucose level prediction model is used together with a wearable CGM device to provide T1D patients with predictions of near-future blood glucose levels.
  • A cascaded DL hybrid model based on RNN and RBM improves the accuracy against the current techniques in the SoTA.
  • A Cloud computing architecture provides the high processing power needed to run the proposed DL model, complementing the low processing power of the wearable CGM device.
The structure of the paper is organized as follows. A brief summary of the SoTA is provided in Section 2. Next, Section 3 and Section 4 are devoted to describing the proposed approach and providing the details of the system, respectively. Section 5 presents the experimental evaluation of the proposal. Finally, the most relevant conclusions are drawn in Section 6.

2. Review of the State-of-the-Art

This section briefly introduces the most relevant contributions in the field. Blood glucose level prediction is considered an important problem and has received considerable attention from researchers aiming to improve prediction accuracy. Artificial intelligence approaches such as DL and ML methods are widely used for the automatic and accurate prediction of blood glucose levels in T1D patients.
In [7], a deep recurrent neural network (RNN) model is presented to predict blood glucose levels 60 min into the future. A parameterized univariate Gaussian output distribution is used for estimating the uncertainty in the prediction. The authors achieve an RMSE of 18.867 using blood glucose level measurements of six people with T1D.
A DL model to predict glucose concentration is presented in [8] using a dilated recurrent neural network (DRNN) to achieve a 30 min prediction into the future. The data used are obtained from two sources: a simulator (the UVA/Padova T1D simulator) and clinical trials (OhioT1DM). Different inputs such as the time index, historical blood glucose, meal intake, and insulin bolus are provided to the network to obtain the glucose level prediction. The results reveal an RMSE of 27.4 mg/dL for the clinical dataset.
ARTiDe, presented in [9], is a model for forecasting blood glucose levels based on a jump neural network (JNN) that handles both the linear and nonlinear components of the input data. The model includes temporal delays for the input signals with auto-regressive feedback. A private as well as a public dataset were used for predicting glucose levels 15, 20, and 30 min into the future, and the results show an RMSE of 18.4.
A hybrid cascaded DL model is presented in [10] to predict blood glucose levels up to 60 min ahead. The model is based on a single long short-term memory (LSTM) layer followed by a bidirectional LSTM layer. The model is trained using different datasets obtained from real T1D patients and simulators, and achieves a prediction RMSE of 21.747.
A deep neural network (DNN) approach is presented in [11] for predicting the blood glucose state 30 min into the future in the form of three classes (hypoglycemic, euglycemic, and hyperglycemic). The prediction error-grid analysis method is considered to improve the model accuracy. The training dataset is obtained from the DirecNet Central Laboratory and consists of time series data from 25 T1D patients. The model achieved an average prediction accuracy of 93%.
In [12], a framework called GluNet is presented for forecasting blood glucose levels of T1D patients. The framework is used for CGM forecasting 30 to 60 min into the future. A deep convolutional neural network (CNN) with a label transform/recover method is trained using datasets obtained from in silico virtual adults, virtual adolescents, and clinical adults. The model achieved a glucose forecasting RMSE of 19.2.
An LSTM DL method is considered in [13] to model the behavior of blood glucose in T1D patients. The model predicts the blood glucose level 30 to 60 min ahead. A dataset of 5 actual patients with 200 data points is used for training and evaluating the model, which achieved an RMSE of 37.8.
In [14], an LSTM-based DL method is used to predict blood glucose dynamics in terms of three classes (high, normal, and low). The model was trained with data from 112 patients and obtained classification results with an average accuracy of 86.7%.
A comparative study is presented in [15] to evaluate different types of ML and DL methods for blood glucose level prediction 30 min ahead. The study compared LSTM DL methods against the classical auto-regression with exogenous inputs (ARX) model. The OhioT1DM dataset is used to train and test the performance of the models. Based on the evaluation experiments, the ARX method achieved the lowest RMSE of 19.53, compared to an RMSE of 19.58 obtained by the VanillaLSTM DL model.
For the prediction and forecasting of time series data, the authors in [16] presented a hybrid deep learning model based on a combination of a recurrent neural network and a restricted Boltzmann machine (RNN-RBM). This model has been used for different types of time series prediction and forecasting applications, such as modeling temporal dependencies in high-dimensional sequences [16], forecasting disasters for mobile networks [17], predicting stock market trends [18], predicting transportation network congestion evolution [19], and predicting anomalies in wastewater treatment plants [20]. The method is distinguished from other models by its ability to capture both spatial and temporal dependencies in the data. The RBM is a generative stochastic neural network that learns the probabilistic distribution of the data. Therefore, RBMs are able to perform pattern completion by generating samples based on the data distribution. This feature can be utilized for temporal time series prediction in the RNN-RBM architecture, where the RBM biases are adjusted by the RNN to transmit temporal information while the remaining RBM parameters are kept constant as a prior for the data distribution.
Different models have their own sets of benefits and drawbacks and are better suited to different sorts of tasks, issues, and datasets. In this work, the potential benefits of using the RNN-RBM model to achieve more accurate (compared to the SoTA) time series blood glucose level prediction are evaluated.

3. The Proposed Approach

This section presents the theoretical framework of the proposed system.

3.1. Restricted Boltzmann Machines (RBM)

Restricted Boltzmann machines (RBMs) have emerged as one of the most popular probabilistic learning methods. Combined with advances in training algorithms such as contrastive divergence, persistent contrastive divergence, and parallel tempering, RBMs have expanded their applicability to a variety of tasks. While successful, most of these models have been used not in the context of relational data, but often with a flat feature representation (vectors, matrices, tensors) [21].
The RBM structure shown in Figure 1 has two layers: the first is the input (visible) layer and the second is the hidden layer. The circles, which have a structure similar to neurons, are called nodes. Nodes are connected across layers, but there are no connections between nodes within the same layer.
There is thus no intra-layer communication, and each node performs its calculation by making random decisions. Each node in the input layer receives a low-level feature of the item in the dataset to be learned. At each hidden node, every input value x is multiplied by its corresponding weight w, the weighted inputs are summed, the bias value is added, and the result is passed through an activation function that produces the output of the node, i.e., the strength of the signal flowing through it. In the configuration of Figure 1, each input x is associated with three weights (one per hidden node), so the four inputs yield twelve weights in total, and each hidden node combines its four weighted inputs with its bias before applying the activation function [22].
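To illustrate, the following minimal NumPy sketch computes the hidden-layer activations for the four-visible/three-hidden configuration described above; the input and weights are random stand-ins, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions matching the Figure 1 example:
# four visible nodes, three hidden nodes, twelve weights in total.
rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=4).astype(float)   # visible (input) vector
W = rng.normal(scale=0.1, size=(4, 3))         # visible-to-hidden weights
b = np.zeros(3)                                # hidden biases

# Each hidden node sums its weighted inputs, adds its bias, and passes
# the result through the activation function.
hidden_out = sigmoid(v @ W + b)
print(hidden_out)
```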
Equation (1) shows the general form of the energy function for a pair of visible and hidden vectors $(v, h)$, where the weight matrix $W$ is associated with the connections between $v$ and $h$ of an RBM [23].

$E(v,h) = -a^{T}v - b^{T}h - v^{T}Wh$  (1)
where $a$ and $b$ are the bias weights of the visible and hidden units, respectively. The joint probability distribution of $v$ and $h$ is expressed in terms of the energy function $E(v,h)$ as in Equation (2).

$P(v,h) = \frac{1}{Z} e^{-E(v,h)}$  (2)
where $Z$ is the normalizing constant (partition function) given in Equation (3).

$Z = \sum_{v,h} e^{-E(v,h)}$  (3)
The probability of a visible vector $v$ is obtained by summing Equation (2) over the hidden units, $P(v) = \sum_{h} P(v,h)$. Taking the derivative of the log-probability of the training data with respect to $W$ yields Equation (4).

$\frac{\partial \sum_{n=1}^{N} \log P(v^{(n)})}{\partial w_{ij}} = \langle v_i h_j \rangle_{\mathrm{data}} - \langle v_i h_j \rangle_{\mathrm{model}}$  (4)
where $\langle v_i h_j \rangle_{\mathrm{data}}$ and $\langle v_i h_j \rangle_{\mathrm{model}}$ denote the expected values under the data and model distributions, respectively. Equation (5) gives the resulting learning rule for the network weights when training on the log-probability of the data.

$\Delta w_{ij} = \varepsilon \left( \langle v_i h_j \rangle_{\mathrm{data}} - \langle v_i h_j \rangle_{\mathrm{model}} \right)$  (5)
where $\varepsilon$ is the learning rate.
In addition, since there are no intra-layer connections, the activations of the hidden units are conditionally independent given the visible units, and vice versa. The conditional probability of $h$ given $v$ is defined as in Equations (6)–(9).
$P(h \mid v) = \prod_{j} P(h_j \mid v), \quad h_j \in \{0, 1\}$  (6)

$P(h_j = 1 \mid v) = \sigma\left( b_j + \sum_{i} v_i W_{ij} \right)$  (7)

$P(v_i = 1 \mid h) = \sigma\left( a_i + \sum_{j} h_j W_{ij} \right)$  (8)

$\sigma(x) = (1 + e^{-x})^{-1}$  (9)
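Equations (5)–(9) together define the standard contrastive-divergence (CD-1) training step for a binary RBM. The following sketch is one possible NumPy implementation of that step; it approximates the model expectation in Equation (5) with a single Gibbs reconstruction, and all variable names are the sketch's own.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, rng, lr=0.01):
    """One contrastive-divergence (CD-1) update for a binary RBM.
    a: visible biases, b: hidden biases, as in Equations (1)-(9)."""
    ph0 = sigmoid(b + v0 @ W)                          # Eq (7): P(h_j = 1 | v)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden state
    pv1 = sigmoid(a + h0 @ W.T)                        # Eq (8): one reconstruction step
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(b + v1 @ W)
    # Eq (5), with <.>_model approximated by the one-step reconstruction
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    a += lr * (v0 - v1)
    b += lr * (ph0 - ph1)
    return W, a, b
```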

3.2. Recurrent Neural Networks

RNNs are a type of artificial neural network in which the connections between units form a directed loop. RNNs are feedforward neural networks augmented with edges that span adjacent time steps, introducing the notion of time into the traditional neural network paradigm [24]. Like feedforward networks, RNNs do not have conventional cycles among their standard edges; however, the edges connecting adjacent time steps, called recurrent edges, can form cycles, including self-connections from a node to itself [25]. At time t, nodes with recurrent edges take their input from both the current data point $x_t$ and the hidden node values $h_{t-1}$ of the previous network state, as shown in Equation (10). The output $y_t$ at time t is calculated from the hidden node values $h_t$ at time t, as shown in Equation (11). Through the recurrent connections, the input $x_{t-1}$ at time t − 1 can affect the output $y_t$ at time t and beyond [26].
$h_t = \sigma\left( W^{hx} x_t + W^{hh} h_{t-1} + b_h \right)$  (10)

$y_t = \mathrm{softmax}\left( W^{yh} h_t + b_y \right)$  (11)

Here, $W^{hx}$ is the matrix of conventional weights between the input and the hidden layer, $W^{hh}$ is the matrix of recurrent weights between the hidden layer and itself at adjacent time steps, and $W^{yh}$ connects the hidden layer to the output. The vectors $b_h$ and $b_y$ are bias parameters. An illustration of a simple RNN structure is given in Figure 2.
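A minimal sketch of one forward step implementing Equations (10) and (11) might look as follows; the weight matrices are assumed to be given, and no training logic is included.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def rnn_step(x_t, h_prev, Whx, Whh, Wyh, bh, by):
    """One recurrent step: Eq (10) for the hidden state, Eq (11) for the output."""
    h_t = sigmoid(Whx @ x_t + Whh @ h_prev + bh)   # Eq (10)
    y_t = softmax(Wyh @ h_t + by)                  # Eq (11)
    return h_t, y_t
```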

3.3. RNN-RBM Model

The RNN and RBM DL methods can each predict a temporal sequence. Combining them into a deep architecture called RNN-RBM takes advantage of both models for capturing complex temporal dependencies in the data. This architecture is obtained by including an RBM at every time step of the RNN, so that the RBM model parameters are determined by the RNN, as shown in Figure 3 [19].
The hidden units of the RNN layers at time t are connected to each other in a chain configuration and connected to the visible layers of the RBMs, as shown in Equation (12) [27].

$\hat{h}_t = \sigma\left( W_2 v_t + W_3 \hat{h}_{t-1} + b_{\hat{h}} \right)$  (12)

where $\hat{h}_t$, $W_2$, $W_3$, and $b_{\hat{h}}$ are the parameters of the single-layer RNN of the RNN-RBM.
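The per-time-step interaction can be sketched as below. Equation (12) is taken directly from the text; the linear conditioning of the next step's RBM biases on $\hat{h}_t$ follows the original RNN-RBM formulation in [16], and the parameter names (W2, W3, Wuh, Wuv) are this sketch's own convention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnnrbm_step(v_t, h_hat_prev, p):
    """One RNN-RBM time step. Eq (12) updates the RNN hidden state h_hat;
    the state then conditions the RBM biases used at the next step, as in
    the original RNN-RBM formulation [16]. The parameter dict p holds
    W2, W3, b_hhat, b_h, b_v, Wuh, and Wuv."""
    h_hat_t = sigmoid(p["W2"] @ v_t + p["W3"] @ h_hat_prev + p["b_hhat"])  # Eq (12)
    b_h_next = p["b_h"] + p["Wuh"] @ h_hat_t   # time-dependent hidden biases
    b_v_next = p["b_v"] + p["Wuv"] @ h_hat_t   # time-dependent visible biases
    return h_hat_t, b_h_next, b_v_next
```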

3.4. Message Queuing Telemetry Transport

Message queuing telemetry transport (MQTT) is an application layer protocol that runs on top of the TCP/IP transport layer for data transfer. It was developed by Andy Stanford-Clark and Arlen Nipper in 1999 and is suitable for resource-constrained devices. It is a lightweight and easy-to-use protocol that provides a convenient transfer facility for communication in resource-constrained systems such as the IoT. MQTT was designed to provide reliable message transmission for resource-constrained systems in environments with low bandwidth and unreliable networks. Data delivery relies on client-server and publish-subscribe mechanisms [28].
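For illustration, a device-side publisher might look like the following sketch, using the paho-mqtt client library; the broker address, topic layout, and sensor stub are hypothetical, not part of the proposed system's published code.

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # hypothetical broker address
TOPIC = "patients/42/glucose"        # hypothetical topic layout

def read_sensor() -> float:
    """Stand-in for the wearable glucose sensor driver."""
    return 120.0

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also takes a callback API version
client.connect(BROKER_HOST, 1883, keepalive=60)
client.loop_start()

# Publish one CGM reading every 5 min (300 s), matching the sampling
# interval of the dataset used in Section 5.
while True:
    reading = {"ts": time.time(), "glucose_mg_dl": read_sensor()}
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(300)
```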

4. The Proposed Architecture

The proposed system architecture for the cloud-based blood glucose level prediction IoT device is shown in Figure 4. The system consists of three main parts: an IoT device with a blood glucose sensor and display, an MQTT broker that provides the connection between the IoT device and the cloud environment, and the cloud-based RNN-RBM DL prediction model. The IoT device includes a glucose level sensor to measure and monitor the patient's blood glucose level values and sends them to the cloud environment via Wi-Fi using the MQTT protocol. The cloud environment contains a database that stores the patient's 20 most recent glucose level data points (the last 100 min), which are then fed to the DL model to predict the glucose level up to a 30 min horizon into the future. The predicted glucose level is sent back to the IoT device for display to the patient.
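On the cloud side, a matching subscriber could maintain the 20-sample window and return a forecast, as in the following sketch; predict_30min is a placeholder for the trained RNN-RBM model, and the topics mirror the hypothetical ones above.

```python
import collections
import json
import paho.mqtt.client as mqtt

WINDOW = 20  # last 20 samples = 100 min of history, as described above
history = collections.deque(maxlen=WINDOW)

def predict_30min(window):          # stand-in for the trained RNN-RBM model
    return window[-1]               # naive placeholder prediction

def on_message(client, userdata, msg):
    history.append(json.loads(msg.payload)["glucose_mg_dl"])
    if len(history) == WINDOW:
        forecast = predict_30min(list(history))
        # Send the predicted value back for display on the IoT device
        client.publish("patients/42/glucose/forecast", json.dumps(forecast))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("patients/42/glucose", qos=1)
client.loop_forever()
```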

5. Experimental Evaluation

The dataset used to train the proposed RNN-RBM DL model for blood glucose level prediction is obtained from the Diabetes Research in Children Network (DirecNet). The dataset contains historical blood glucose levels recorded at 5 min intervals for 110 T1D patients aged between 7 and 17. The data were collected with the approval of the patients and their parents over a 3-month period in separate sessions to ensure patient comfort and safety [29]. Table 1 summarizes information about the considered patients, where the Record ID represents the sample number of the glucose level measurement for each patient with a specific Patient ID.
In this work, a subset of 10 randomly selected patients, each with more than 1000 glucose level data points, is considered, as shown in Table 2 below.
For each of the selected patients, 80% of the glucose level values are used for training the proposed RNN-RBM DL model and the remaining 20% are used for testing and evaluating the model performance.
To avoid model overfitting, a 10-fold blocked cross-validation method [30] is used for the performance evaluation of the model.
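A minimal sketch of such a blocked split is given below; the margin of six samples (one 30 min horizon) between training and validation blocks is an assumption, as the paper does not state the margin used.

```python
import numpy as np

def blocked_kfold(n_samples, n_folds=10, margin=6):
    """Blocked cross-validation in the spirit of [30]: each fold validates on
    one contiguous block, and `margin` samples on either side of the block
    are dropped from training to limit leakage between adjacent samples."""
    indices = np.arange(n_samples)
    for block in np.array_split(indices, n_folds):
        lo, hi = block[0], block[-1]
        train = indices[(indices < lo - margin) | (indices > hi + margin)]
        yield train, block

# Example: 10 folds over 1053 samples (patient 1 in Table 2)
for train_idx, val_idx in blocked_kfold(1053):
    pass  # train on train_idx, validate on val_idx
```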
The hyperparameters selected for the proposed RNN-RBM DL method for blood glucose level prediction are shown in Table 3. The best hyperparameter values were selected experimentally from the search space ranges shown in Table 3.
In our evaluation experiments, the presented model is trained using batch learning, where the data are sent to the network as batches determined by a sliding window.
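The sliding-window construction can be sketched as follows; the window size of 20 matches Table 3, and the horizon of 6 steps is inferred from the 30 min horizon at 5 min sampling. The input file name is hypothetical.

```python
import numpy as np

def make_windows(series, window=20, horizon=6):
    """Pair each sliding window of `window` past samples with the value
    `horizon` steps ahead (6 steps x 5 min = 30 min)."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i : i + window])
        y.append(series[i + window + horizon - 1])
    return np.asarray(X), np.asarray(y)

# Example: batches of 512 windows, as in Table 3
glucose = np.loadtxt("patient_1.csv")  # hypothetical file of glucose values
X, y = make_windows(glucose)
batches = [(X[i:i + 512], y[i:i + 512]) for i in range(0, len(X), 512)]
```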
The DL model is trained in the cloud environment using the training data and evaluated using the test data of each of the 10 selected patients mentioned above. The root mean square error (RMSE) and mean absolute error (MAE) metrics are used to evaluate the performance of the model; they are calculated using Equations (13) and (14), respectively [31].
$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( \mathrm{predicted}_i - \mathrm{actual}_i \right)^2}{n}}$  (13)

$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \mathrm{predicted}_i - \mathrm{actual}_i \right|$  (14)
where n represents the number of samples.
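Equations (13) and (14) translate directly into NumPy, for example:

```python
import numpy as np

def rmse(predicted, actual):
    """Root mean square error, Equation (13)."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return np.sqrt(np.mean((predicted - actual) ** 2))

def mae(predicted, actual):
    """Mean absolute error, Equation (14)."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return np.mean(np.abs(predicted - actual))
```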
Figure 5 shows the RMSE of the proposed DL model during training and validation across epochs.
Table 4 shows the evaluation results of the RNN-RBM DL model for blood glucose level prediction compared to the regular RNN method, using the data of the 10 selected patients.
Based on the results shown in Table 4, the proposed RNN-RBM achieved an average RMSE of 15.589 and an average MAE of 12.926 over the evaluation data of all 10 patients, whereas the RNN method achieved an average RMSE of 18.618 and an average MAE of 13.319. Therefore, the RNN-RBM outperformed the regular RNN model for glucose level prediction, with 16.27% less RMSE ((18.618 − 15.589)/18.618 ≈ 0.1627).
Figure 6 shows the predictions of the proposed RNN-RBM model against the actual values.
Table 5 compares the presented RNN-RBM method for blood glucose level prediction with similar related SoTA works in terms of RMSE.
The comparison results in Table 5 show that the proposed RNN-RBM method achieves the lowest RMSE and is thus the most accurate among the compared SoTA methods.

6. Conclusions

In this paper, a cloud-based DL model based on cascaded RNN and RBM methods is proposed for wearable IoT continuous glucose monitoring (CGM) devices. The proposed Cloud&DL-based model predicts the blood glucose levels over a 30 min horizon at 5 min intervals, based on the prior 20 samples of the patient's blood glucose levels. Therefore, the model is able to predict fluctuations in the glucose level occurring at 5 min intervals, which gives the patient sufficient time to take action in case a hyperglycemia condition is predicted.
The DL model is implemented and trained using batch learning in the considered Cloud computing environment, which provides many features such as virtualization and resource sharing. Cloud computing also provides machine-learning-specific services, such as Machine Learning as a Service (MLaaS), which is specifically designed for machine learning prediction applications and serves multiple users simultaneously via multiple virtual machines.
For a real application, the model can be initially pre-trained using batch learning; since the model is implemented in a cloud environment, online learning can then be used to continuously update the model with new patient data.
The lightweight MQTT communication protocol is used for data exchange between the low-power CGM IoT device and the cloud-based prediction model. Real data of 10 T1D patients were used to train and evaluate the proposed RNN-RBM DL method. The experimental results demonstrate that the RNN-RBM DL model achieves the highest prediction accuracy, outperforming the regular RNN method and similar methods in the state-of-the-art. Since the model is implemented in a Cloud computing environment, it requires a service provider with a high QoS and a high level of security to protect the patients' information and privacy.
For future research, the proposed architecture can be extended to work with insulin delivery devices in order to provide patients with the required amount of insulin based on the predicted future blood glucose levels. In this case, a second deep learning model can be deployed to estimate the amount of insulin that should be provided to the patient based on the future blood glucose values obtained from the model presented in this work. The estimated amount of insulin would then be supplied to the patient via the insulin delivery device. Furthermore, optimization methods can be used to find the optimal model hyperparameters instead of the traditional experimental method.

Author Contributions

Conceptualization, A.R.N., L.A. and A.J.H.; methodology, A.R.N. and A.J.H.; software, A.R.N., M.A.F.; validation, A.R.N., A.M.H., L.A., J.S., Y.D. and A.J.H.; formal analysis, A.R.N., L.A.; investigation, A.R.N.; resources, A.R.N.; data curation, A.R.N.; writing—original draft preparation, A.R.N., A.J.H. and L.A.; writing—review and editing, A.R.N., A.M.H., A.J.H., A.A., L.A., M.A.F., J.S. and Y.D.; visualization, A.R.N. and M.A.F.; supervision, A.J.H.; project administration, A.J.H., J.S. and Y.D.; funding acquisition, A.J.H., A.A., L.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Roglic, G. WHO Global report on diabetes: A summary. Int. J. Noncommunicable Dis. 2016, 1, 3. [Google Scholar] [CrossRef]
  2. American Diabetes Association. Diagnosis and classification of diabetes mellitus. Diabetes Care 2014, 37, S81–S90. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Fadhel, S.F.; Raafat, S.M. H∞ Loop Shaping Robust Postprandial Glucose Control for Type 1 Diabetes. Eng. Technol. J. 2021, 39, 268–279. [Google Scholar] [CrossRef]
  4. Bilous, R.; Donnelly, R.; Idris, I. Handbook of Diabetes; John Wiley & Sons: New York, NY, USA, 2021. [Google Scholar]
  5. Wang, Q.; Molenaar, P.; Harsh, S.; Freeman, K.; Xie, J.; Gold, C.; Rovine, M.; Ulbrecht, J. Personalized state-space modeling of glucose dynamics for type 1 diabetes using continuously monitored glucose, insulin dose, and meal intake: An extended Kalman filter approach. J. Diabetes Sci. Technol. 2014, 8, 331–345. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Bhat, G.M.; Bhat, N.G. A novel IoT based framework for blood glucose examination. In Proceedings of the 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, India, 15–16 December 2017; pp. 205–207. [Google Scholar]
  7. Martinsson, J.; Schliep, A.; Eliasson, B.; Mogren, O. Blood glucose prediction with variance estimation using recurrent neural networks. J. Healthc. Inform. Res. 2020, 4, 1–18. [Google Scholar] [CrossRef] [Green Version]
  8. Zhu, T.; Li, K.; Chen, J.; Herrero, P.; Georgiou, P. Dilated recurrent neural networks for glucose forecasting in type 1 diabetes. J. Healthc. Inform. Res. 2020, 4, 308–324. [Google Scholar] [CrossRef] [Green Version]
  9. D’Antoni, F.; Merone, M.; Piemonte, V.; Iannello, G.; Soda, P. Auto-Regressive Time Delayed jump neural network for blood glucose levels forecasting. Knowl.-Based Syst. 2020, 203, 106134. [Google Scholar] [CrossRef]
  10. Sun, Q.; Jankovic, M.V.; Bally, L.; Mougiakakou, S.G. Predicting blood glucose with an lstm and bi-lstm based deep neural network. In Proceedings of the 2018 14th Symposium on Neural Networks and Applications (NEUREL), Belgrade, Serbia, 15–16 December 2018; pp. 1–5. [Google Scholar]
  11. Mhaskar, H.N.; Pereverzyev, S.V.; van der Walt, M.D. A deep learning approach to diabetic blood glucose prediction. Front. Appl. Math. Stat. 2017, 3, 14. [Google Scholar] [CrossRef] [Green Version]
  12. Li, K.; Liu, C.; Zhu, T.; Herrero, P.; Georgiou, P. GluNet: A deep learning framework for accurate glucose forecasting. IEEE J. Biomed. Health Inform. 2019, 24, 414–423. [Google Scholar] [CrossRef]
  13. Mirshekarian, S.; Bunescu, R.; Marling, C.; Schwartz, F. Using LSTMs to learn physiological models of blood glucose behavior. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 2887–2891. [Google Scholar]
  14. Gu, W.; Zhou, Z.; Zhou, Y.; He, M.; Zou, H.; Zhang, L. Predicting blood glucose dynamics with multi-time-series deep learning. In Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, Delft, The Netherlands, 6–8 November 2017; pp. 1–2. [Google Scholar]
  15. Xie, J.; Wang, Q. Benchmarking Machine Learning Algorithms on Blood Glucose Prediction for Type I Diabetes in Comparison with Classical Time-Series Models. IEEE Trans. Biomed. Eng. 2020, 67, 3101–3124. [Google Scholar] [CrossRef]
  16. Boulanger-Lewandowski, N.; Bengio, Y.; Vincent, P. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv 2012, arXiv:1206.6392. [Google Scholar]
  17. Wang, X.; Jiang, F.; Zhong, L.; Ji, Y.; Yamada, S.; Takano, K.; Xue, G. Intelligent post-disaster networking by exploiting crowd big data. IEEE Netw. 2020, 34, 49–55. [Google Scholar] [CrossRef]
  18. Yoshihara, A.; Fujikawa, K.; Seki, K.; Uehara, K. Predicting stock market trends by recurrent deep neural networks. In Proceedings of the Pacific Rim International Conference on Artificial Intelligence, Gold Coast, QLD, Australia, 1–5 December 2014; pp. 759–769. [Google Scholar]
  19. Ma, X.; Yu, H.; Wang, Y.; Wang, Y. Large-scale transportation network congestion evolution prediction using deep learning theory. PLoS ONE 2015, 10, e0119044. [Google Scholar] [CrossRef]
  20. Dairi, A.; Cheng, T.; Harrou, F.; Sun, Y.; Leiknes, T. Deep learning approach for sustainable WWTP operation: A case study on data-driven influent conditions monitoring. Sustain. Cities Soc. 2019, 50, 101670. [Google Scholar] [CrossRef]
  21. Cifuentes, J.; Yao, Y.; Yan, M.; Zheng, B. Blood transfusion prediction using restricted Boltzmann machines. Comput. Methods Biomech. Biomed. Eng. 2020, 23, 510–517. [Google Scholar] [CrossRef]
  22. Timirgazin, M.; Arzhnikov, A. Predicting long-and short-range order with restricted Boltzmann machine. AIP Adv. 2021, 11, 015027. [Google Scholar] [CrossRef]
  23. de Souza, R.W.; Silva, D.S.; Passos, L.A.; Roder, M.; Santana, M.C.; Pinheiro, P.R.; de Albuquerque, V.H.C. Computer-assisted Parkinson’s disease diagnosis using fuzzy optimum-path forest and Restricted Boltzmann Machines. Comput. Biol. Med. 2021, 131, 104260. [Google Scholar] [CrossRef] [PubMed]
  24. Albayati, A.Q.; Ameen, S.H. A Method of Deep Learning Tackles Sentiment Analysis Problem in Arabic Texts. Iraqi J. Comput. Commun. Control. Syst. Eng. 2020, 20, 9–20. [Google Scholar]
  25. Chen, J.; Li, K.; Herrero, P.; Zhu, T.; Georgiou, P. Dilated Recurrent Neural Network for Short-time Prediction of Glucose Concentration. In Proceedings of the KHD@ IJCAI, Stockholm, Schweden, 13 July 2018; pp. 69–73. [Google Scholar]
  26. Yaxiong, L.; Jianqiang, Z.; Deng, P.; Dan, H. A study of speech recognition based on RNN-RBM language model. J. Comput. Res. Dev. 2014, 51, 1936. [Google Scholar]
  27. Sanjuan, E.B.; Cardiel, I.A.; Cerrada, J.A.; Cerrada, C. Message Queuing Telemetry Transport (MQTT) Security: A Cryptographic Smart Card Approach. IEEE Access 2020, 8, 115051–115062. [Google Scholar] [CrossRef]
  28. Salman, A.D. IoT Monitoring System Based on MQTT Publisher/Subscriber Protocol. Iraqi J. Comput. Commun. Control. Syst. Eng. 2020, 20, 75–83. [Google Scholar]
  29. Diabetes Research in Children Network (DirecNet) Study Group. The accuracy of the CGMS™ in children with type 1 diabetes: Results of the Diabetes Research in Children Network (DirecNet) accuracy study. Diabetes Technol. Ther. 2003, 5, 781. [Google Scholar]
  30. Bergmeir, C.; Benítez, J.M. On the use of cross-validation for time series predictor evaluation. Inf. Sci. 2012, 191, 192–213. [Google Scholar] [CrossRef]
  31. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623. [Google Scholar] [CrossRef]
Figure 1. RBM Structure.
Figure 2. RNN Structure.
Figure 3. The structure of RNN-RBM.
Figure 4. The proposed system architecture.
Figure 5. The RMSE results of the proposed model in different epochs.
Figure 6. The predicted glucose level using the RNN-RBM model versus the actual level.
Table 1. Dataset Information.

Record ID | Patient ID | Reading Date | Reading Time | Sensor Glucose Level (mg/dL)
35 | 2 | 1/1/2000 0:00 | 10:12 p.m. | 164
36 | 2 | 1/1/2000 0:00 | 10:17 p.m. | 160
37 | 2 | 1/1/2000 0:00 | 10:22 p.m. | 152
38 | 2 | 1/1/2000 0:00 | 10:27 p.m. | 145
39 | 2 | 1/1/2000 0:00 | 10:32 p.m. | 135
40 | 2 | 1/1/2000 0:00 | 10:37 p.m. | 128
41 | 2 | 1/1/2000 0:00 | 10:42 p.m. | 125
42 | 2 | 1/1/2000 0:00 | 10:47 p.m. | 118
43 | 2 | 1/1/2000 0:00 | 10:52 p.m. | 116
Table 2. The information of the 10 randomly selected patients.

Patient ID | Glucose Level Measurement Data Points
1 | 1053
5 | 1139
24 | 1589
45 | 1425
60 | 1165
71 | 1090
80 | 1055
85 | 1671
97 | 1448
104 | 1303
Table 3. Model hyperparameters.

Hyperparameter | Value | Search Space
Number of hidden units | 100 | (50–150)
Window size | 20 data points | (10–30)
Optimizer | Adam | (Adam, SGD)
Batch size | 512 | (64–1024)
Learning rate | 0.001 | (0.001–0.1)
Epochs | 150 | (50–250)
Table 4. The evaluation results.

Patient Number | Patient ID | RNN-RBM RMSE | RNN-RBM MAE | RNN RMSE | RNN MAE
1 | 1 | 15.589 | 13.185 | 18.617 | 13.319
2 | 5 | 17.962 | 15.114 | 19.165 | 15.238
3 | 24 | 15.747 | 14.352 | 21.429 | 13.318
4 | 45 | 15.278 | 11.990 | 18.485 | 13.317
5 | 60 | 15.245 | 13.920 | 18.559 | 13.319
6 | 71 | 14.539 | 10.857 | 16.594 | 11.646
7 | 80 | 15.589 | 12.684 | 18.720 | 12.454
8 | 85 | 14.356 | 10.703 | 17.543 | 13.868
9 | 97 | 15.139 | 12.226 | 16.545 | 13.626
10 | 104 | 16.446 | 14.229 | 20.523 | 13.085
Average for all patients | — | 15.589 | 12.926 | 18.618 | 13.319
Table 5. Comparison between the proposed RNN-RBM method results and the other well-established methods in the literature.

Method | RMSE | Dataset | Number of Patients | Model Parameters
RNN [7] | 18.867 | OhioT1DM | 6 | Layers 256; learning rate 0.001; batch size 1024; Adam optimizer; 200 epochs
DRNN [8] | 27.4 | OhioT1DM | 6 | Sequence length 12; hidden nodes 32; batch size 512; learning rate 0.001
JNN [9] | 18.4 | Unit of Endocrinology and Diabetology of Campus Bio-Medico University Hospital | 17 | Hidden neurons 4
LSTM [10] | 21.747 | GoCARB | 26 | Layers 4, 8, 64, 8; 1300 epochs
CNN [12] | 19.2 | OhioT1DM | 6 | Layers 5, 64, 32; window size 16
LSTM [13] | 37.8 | OhioT1DM | 5 | Layers 3; learning rate 0.0001; 5000 epochs; batch size 500
ARX [15] | 19.53 | OhioT1DM | 6 | Alpha = 0.0001; window size 576
VanillaLSTM [15] | 19.58 | OhioT1DM | 6 | Layers 3, 10; 10 epochs; Adam optimizer; learning rate 0.001; window size 576
The proposed RNN-RBM | 15.589 | DirecNet | 10 | See Table 3
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
