Article

Hydrogen Storage Prediction in Dibenzyltoluene as Liquid Organic Hydrogen Carrier Empowered with Weighted Federated Machine Learning

1 Department of Mechanical Engineering, Gachon University, Seongnam 13120, Korea
2 Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University Lahore Campus, Lahore 54000, Pakistan
3 Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam 13120, Korea
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(20), 3846; https://doi.org/10.3390/math10203846
Submission received: 13 September 2022 / Revised: 14 October 2022 / Accepted: 15 October 2022 / Published: 17 October 2022
(This article belongs to the Topic Machine and Deep Learning)

Abstract

Hydrogen stored in liquid organic hydrogen carriers (LOHCs) offers a safe and convenient hydrogen storage system. Dibenzyltoluene (DBT), owing to its low flammability, liquid nature and high hydrogen storage capacity, is an efficient LOHC system. It is imperative to identify the optimal reaction conditions needed to achieve the theoretical hydrogen storage density. Hence, a Hydrogen Storage Prediction System empowered with Weighted Federated Machine Learning (HSPS-WFML) is proposed in this study. The dataset was divided into three classes, i.e., low, medium and high, and the performance of the proposed HSPS-WFML was investigated. The accuracy of the medium class (99.90%) is higher than that of the other classes. The accuracy of the low and high classes is 96.50% and 96.40%, respectively. Moreover, the overall accuracy and miss rate of the proposed HSPS-WFML are 96.40% and 3.60%, respectively. The proposed model is compared with existing studies related to hydrogen storage prediction, and its accuracy is in agreement with these studies. Therefore, the proposed HSPS-WFML is an efficient model for hydrogen storage prediction.

1. Introduction

In order to accommodate the fluctuations of renewable energy sources (such as wind and solar), the sustainable energy system of the future will require large-scale energy storage methods. It is also essential to facilitate the transportation of energy from places with high yielding capacity to places with high energy demand. Hydrogen is an ideal candidate for this purpose [1,2,3], but its low volumetric energy density and high flammability pose major challenges to its technical implementation [4,5,6]. These challenges can be overcome by the liquid organic hydrogen carrier (LOHC) system, in which hydrogen is stored by covalently bonding hydrogen molecules to organic molecules and is released as a product on demand during the hydrogen production process. The LOHC system can use the existing infrastructure, which may ease storage and transport between these locations [1,4,6].
Among various LOHCs, dibenzyltoluene (H0-DBT) has the potential to chemically store and release hydrogen. H0-DBT not only has a hydrogen storage capacity of 6.2 wt %, but its hydrogenated form, perhydrodibenzyltoluene (H18-DBT), also has a relatively high density of 0.91 kg·L−1. The H0-DBT/H18-DBT system has a wide liquid temperature range (−39 °C to 390 °C) and low flammability [1,4,7]. A reaction temperature above 250 °C is essential for hydrogen production from H18-DBT due to the strongly endothermic nature of the dehydrogenation reaction to H0-DBT (Equation (1)) [1,4,7].
H18-DBT → H0-DBT + 9H2    (1)
For hydrogen release from H18-DBT at reaction temperatures below 200 °C, acetone can be employed as an acceptor molecule (Equation (2)): instead of evolving hydrogen, the reaction yields isopropanol [8,9]. In addition to being a valuable chemical, isopropanol can be used in direct fuel cell systems to produce electricity (Equation (3)).
H18-DBT + 9C3H6O → H0-DBT + 9C3H8O    (2)
C3H8O + 0.5O2 → C3H6O + H2O    (3)
The hydrogenation of H0-DBT has been extensively studied to identify the optimal reaction conditions for achieving higher hydrogen storage capacities [10,11,12]. These studies revealed that temperature and pressure play a vital role in attaining the maximum gravimetric hydrogen density of 6.2 wt %. Moreover, the selection of an appropriate catalyst enhances the reaction rate, and stirring is important for the homogeneous mixing of the catalyst and H0-DBT. Researchers have conducted series of experiments to determine the optimal conditions for maximum hydrogen storage, a process that consumes considerable time and energy. Machine learning algorithms (MLAs), however, have emerged as an alternative approach for predicting hydrogen storage from existing data. Using this method, researchers can predict hydrogen storage efficiently in a short period of time, instead of explicitly solving the fundamental equations through classical simulations and computational approaches. Hence, machine learning algorithms are becoming useful for reliable hydrogen storage predictions based on past data, or for generating reliable data.
MLAs have been applied to a variety of materials in recent years, including thermoelectrics [13,14,15,16], perovskite solids [17,18,19,20], carbon-capture materials [21,22], electrocatalysts [23,24], oxides and inorganic materials [25,26,27,28], interphase precipitation in micro-alloyed steels [29] and light-emitting transistors [30]. In H2-selective nanocomposite membranes, MLAs have also been developed to predict C3H8, H2, CH4 and CO2 sorption [31]. Rezakazemi et al. compared the performance of H2-selective mixed matrix membranes (MMM) under various operational conditions using an adaptive neuro-fuzzy inference system (ANFIS) [32]. To predict gas diffusion in binary filler nanocomposite membranes, hybrid machine-learning models were applied [33]. For hydrogen storage prediction in metal hydrides, Rahnama et al. used various regression algorithms such as linear regression, neural network regression, Bayesian linear regression and boosted decision tree regression. They reported that the hydrogen storage capacity increased with reaction temperature, and boosted decision tree regression was found to be the optimal algorithm, yielding the highest coefficient of determination [34,35]. In a recent work by Ahmed et al., the extremely randomized trees (ERT) algorithm was the most accurate for predicting gravimetric and volumetric hydrogen capacities in metal–organic frameworks [36]. Although hydrogen storage in H0-DBT is an efficient process, its prediction, to the best of our knowledge, has not yet been reported. Hence, hydrogen storage prediction in H0-DBT, empowered with federated machine learning, is investigated in the current work.
The federated learning (FL) framework has gained popularity in recent years because of its high level of assurance in learning with a small amount of secured data. Instead of integrating data from multiple databases or relying on outmoded discovery and replication techniques, it allows a global model to be trained through a fundamental server while preserving information within each organization. A global model is thus constructed through collaboration between organizations. With FL, a master model can be developed using training data from several sources rather than directly transmitting the data. Several machine learning algorithms have been developed for the prediction of hydrogen storage in various materials. Despite not sharing data, FL performs well, as it is a powerful machine learning (ML) approach. FL uses machine learning models to enhance data privacy and security [37], particularly to ensure that the FL process and data are secure.
This ensures that the privacy of the data is safeguarded across several locations. With the use of FL, a number of businesses or academic institutions collaborate under the direction of a central server or facility provider to solve an ML problem. The ML model is trained by distributing the problem among disparate data centers, such as hospitals or other healthcare-related institutions, while preserving localized data. Throughout the whole training process, the data are kept private. As opposed to traditional centralized learning, which delivers the secured data to a single server, FL maintains a common global model that any institution can use. Each organization first trains the model using its own data. Each center then transmits its model's error gradient to the server. The central server gathers all participant feedback and, based on predetermined criteria, updates the global model. The server evaluates the quality of each response using these predetermined criteria, so that only useful information is incorporated; in other words, centers reporting poor or unusual results might not receive any attention. This procedure is repeated, one federated learning round at a time, until the global model is learned. The complete design of FL, adapted from [38] with significant changes, is shown in Figure 1.
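To make one such round concrete, the following minimal Python sketch (our illustration, not the authors' implementation; `client_update`, `server_aggregate` and the synthetic data are hypothetical) shows clients training locally and a server fusing only their weights:

```python
import numpy as np

def client_update(global_w, X, y, lr=0.01, epochs=5):
    """Hypothetical client step: start from the global model, train on
    local data only, and return the updated weights (never the data)."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of a simple squared loss
        w -= lr * grad
    return w

def server_aggregate(client_weights, scores):
    """Score-weighted fusion: clients judged to perform better receive
    larger scaling factors before the weighted sum."""
    scores = np.asarray(scores, dtype=float) / np.sum(scores)
    return sum(s * w for s, w in zip(scores, client_weights))

# One federated round over three clients holding synthetic local data.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
local_ws = [client_update(global_w, X, y) for X, y in clients]
global_w = server_aggregate(local_ws, scores=[2.0, 1.0, 0.5])
```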
Edge computing refers to the process of physically bringing computing capacity closer to the source of data, which is typically an Internet of Things (IoT) device or sensor. Pushing computational power to the edge of the network or device allows for faster data processing, larger bandwidth and data sovereignty. By processing data at the network's edge, edge computing reduces the need for large volumes of data to travel between servers, the cloud and devices or edge locations. This is particularly true for modern applications such as data science and artificial intelligence. The goal of edge computing [39] is to bring data sources and devices closer together, making processing time-efficient. Hence, in theory, the application and device operate more effectively and efficiently as a result.
Transfer learning develops a model for one problem and reuses it in some way on a second, related problem. As a deep learning approach, transfer learning involves first training a neural network model on a problem comparable to the one being solved. One or more layers of the previously learned model are then used to train a new model on the problem of interest. Transfer learning reduces generalization error while accelerating the training of neural network models. The training procedure may start from the weights of the previously utilized layers and, when needed, adapt them to the new problem. In this setting, transfer learning can be seen as a particular weight initialization strategy.
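As a minimal sketch of this weight-initialization view (our illustration; the two-layer network and all names are hypothetical, not the paper's model), a pretrained feature layer can be frozen while only the new output layer is trained:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature layer pretrained on a related problem (frozen).
W1_pretrained = rng.normal(size=(8, 16))

# New task-specific output layer, initialized from scratch (trainable).
W2 = np.zeros((16, 1))

def forward(x):
    h = np.tanh(x @ W1_pretrained)  # reused representation
    return h @ W2                   # new head for the related problem

# A gradient step on the new task updates only W2, leaving the
# transferred weights untouched (they could be fine-tuned later).
x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 1))
h = np.tanh(x @ W1_pretrained)
W2 -= 0.01 * h.T @ (forward(x) - y) / len(y)  # least-squares step on the head
```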
In this study, hydrogen storage prediction in dibenzyltoluene is conducted. Weighted federated machine learning is employed for this purpose, and the Hydrogen Storage Prediction System empowered with Weighted Federated Machine Learning (HSPS-WFML) is proposed. The performance of the proposed model is investigated in terms of various statistical parameters such as accuracy, miss rate, recall, selectivity and precision.

2. Materials and Methods

Figure 2 depicts the various layers of our proposed HSPS-WFML model. Hydrogen storage predictions are derived from the wide range of datasets that the data sources produce. A range of data sources, including pressure sensors, temperature detectors and other devices, are used to collect data. All of these devices are connected to a data acquisition system that records the data. The recorded data are then forwarded to the pre-processing layer, where tasks such as data filtering and redundant data cleansing are performed. The three ANN training techniques of Levenberg–Marquardt (LM), Bayesian Regularization (BR) and Scaled Conjugate Gradient (SCG) are applied separately in the training phase, and their performance is evaluated in terms of accuracy, precision, sensitivity and selectivity. If a trained model does not meet the performance criteria, it is retrained. Once all three models are satisfactory, they are combined in the next step to generate the final federated ML model. The performance of the trained federated model is then evaluated: if it performs satisfactorily, the model is stored in the cloud data storage; otherwise, it is retrained.
The purpose of applying each technique on the client side is to process the collected dataset and calculate the optimum weights to be used on the server side. As the dataset is divided into three classes, i.e., low, medium and high, three different techniques were adopted simultaneously. The weights from the trained models were then sent to the server side, where the FML was applied for hydrogen storage prediction. The main advantages of FML are data security and the improved accuracy of the system.
In this study, an adaptive back propagation neural network (ABPNN) with three layers—input layer, hidden layer and output layer—is utilized to forecast hydrogen storage. This section describes the mathematical model of the proposed HSPS-WFML. The input features are represented as $[s_1, s_2, s_3, \ldots, s_n]$, and $t$, $f$ and $n$ denote the element index in the input, hidden and output layer, respectively. The biases added in the hidden and output layers are denoted $c_1$ and $c_2$. $a^{t,f}$ represents the weights between the input layer and the hidden layer, and the weights between the hidden layer and the output layer are denoted $b^{f,n}$. $n$, $k$ and $g$ are the total numbers of elements in the input layer, hidden layer and output layer, respectively, which define the dimension of each layer. The output at each neuron of the hidden layer can be calculated using Equation (4) [40], in which $w_{cl_i}^{f}$ represents the output of the $f$th hidden neuron of the $i$th client $cl_i$.
$$w_{cl_i}^{f} = \frac{1}{1 + e^{-\left(c_1 + \sum_{t=1}^{k} a_{cl_i}^{t,f}\, s_t\right)}}, \quad \text{where } 1 \le f \le k \qquad (4)$$
Similarly, $x_{cl_i}^{n}$ in Equation (5) [41] represents the output of the $n$th neuron of the output layer.
$$x_{cl_i}^{n} = \frac{1}{1 + e^{-\left(c_2 + \sum_{f=1}^{k} b_{cl_i}^{f,n}\, w_{cl_i}^{f}\right)}}, \quad \text{where } 1 \le n \le g \qquad (5)$$
$$F_{cl_i} = \frac{1}{2} \sum_{n} \left( \beta_{cl_i}^{n} - x_{cl_i}^{n} \right)^{2} \qquad (6)$$
where $F_{cl_i}$ represents the error of the $i$th client, and $\beta_{cl_i}^{n}$ and $x_{cl_i}^{n}$ in Equation (6) [42] denote the desired and predicted outputs, respectively.
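As a minimal NumPy sketch of Equations (4)–(6) (our illustration, not the authors' code; names and shapes are assumptions, with A of size n × k and B of size k × g):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def client_forward(s, A, B, c1, c2):
    """Forward pass for one client: s is the input feature vector,
    A the input-to-hidden weights, B the hidden-to-output weights,
    c1 and c2 the layer biases."""
    w = sigmoid(c1 + A.T @ s)  # hidden-layer outputs, Equation (4)
    x = sigmoid(c2 + B.T @ w)  # output-layer outputs, Equation (5)
    return w, x

def client_error(beta, x):
    """Squared error of Equation (6): beta is the desired output vector."""
    return 0.5 * np.sum((beta - x) ** 2)
```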
The weight variations for the output and hidden layers are proportional to the negative gradient of the error, as stated in Equations (7)–(10) [40,41]:
$$\Delta A \propto -\frac{\partial F_{cl_i}}{\partial A_{cl_i}} \qquad (7)$$
$$\Delta B \propto -\frac{\partial F_{cl_i}}{\partial B_{cl_i}} \qquad (8)$$
$$\Delta b_{cl_i}^{f,n} \propto -\frac{\partial F_{cl_i}}{\partial b_{cl_i}^{f,n}} \qquad (9)$$
$$\Delta a_{cl_i}^{t,f} \propto -\frac{\partial F_{cl_i}}{\partial a_{cl_i}^{t,f}} \qquad (10)$$
Applying the chain rule, the weight change for the hidden-to-output layer can be written as
$$\Delta b_{cl_i}^{f,n} = -\zeta\, \frac{\partial F_{cl_i}}{\partial x_{cl_i}^{n}} \times \frac{\partial x_{cl_i}^{n}}{\partial b_{cl_i}^{f,n}} \qquad (11)$$
where $\zeta$ represents a learning constant. The altered weight value can be derived by substituting the derivative values into Equation (11), as shown in Equations (12) and (13).
$$\Delta b_{cl_i}^{f,n} = \zeta \left( \beta_{cl_i}^{n} - x_{cl_i}^{n} \right) \times x_{cl_i}^{n} \left( 1 - x_{cl_i}^{n} \right) \times w_{cl_i}^{f} \qquad (12)$$
$$\Delta b_{cl_i}^{f,n} = \zeta\, \lambda_{cl_i}^{n}\, w_{cl_i}^{f} \qquad (13)$$
where
$$\lambda_{cl_i}^{n} = \left( \beta_{cl_i}^{n} - x_{cl_i}^{n} \right) \times x_{cl_i}^{n} \left( 1 - x_{cl_i}^{n} \right) \qquad (14)$$
For updating the weights between the input and hidden layers, we use the chain rule.
$$\Delta a_{cl_i}^{t,f} \propto -\left[ \sum_{n} \frac{\partial F_{cl_i}}{\partial x_{cl_i}^{n}} \times \frac{\partial x_{cl_i}^{n}}{\partial w_{cl_i}^{f}} \right] \times \frac{\partial w_{cl_i}^{f}}{\partial a_{cl_i}^{t,f}}, \qquad \Delta a_{cl_i}^{t,f} = -\zeta \left[ \sum_{n} \frac{\partial F_{cl_i}}{\partial x_{cl_i}^{n}} \times \frac{\partial x_{cl_i}^{n}}{\partial w_{cl_i}^{f}} \right] \times \frac{\partial w_{cl_i}^{f}}{\partial a_{cl_i}^{t,f}} \qquad (15)$$
$$\Delta a_{cl_i}^{t,f} = \zeta \sum_{n} \left( \beta_{cl_i}^{n} - x_{cl_i}^{n} \right) \times x_{cl_i}^{n} \left( 1 - x_{cl_i}^{n} \right) \times b_{cl_i}^{f,n} \times w_{cl_i}^{f} \left( 1 - w_{cl_i}^{f} \right) \times s_t \qquad (16)$$
$$\Delta a_{cl_i}^{t,f} = \zeta \left[ \sum_{n} \lambda_{cl_i}^{n}\, b_{cl_i}^{f,n} \right] \times w_{cl_i}^{f} \left( 1 - w_{cl_i}^{f} \right) \times s_t \qquad (17)$$
After simplifying the equation, it may be expressed as follows:
$$\Delta a_{cl_i}^{t,f} = \zeta\, \alpha_{cl_i}^{f} \times s_t \qquad (18)$$
where
$$\alpha_{cl_i}^{f} = \left[ \sum_{n} \lambda_{cl_i}^{n}\, b_{cl_i}^{f,n} \right] \times w_{cl_i}^{f} \left( 1 - w_{cl_i}^{f} \right) \qquad (19)$$
$$b_{cl_i}^{f,n}(d+1) = b_{cl_i}^{f,n}(d) + \gamma\, \Delta b_{cl_i}^{f,n} \qquad (20)$$
The weights between the hidden and output layers are adjusted using Equation (20) [41,42], where $\gamma$ is the learning rate. The weights between the input and hidden layers are updated using Equation (21) [40,42].
$$a_{cl_i}^{t,f}(d+1) = a_{cl_i}^{t,f}(d) + \gamma\, \Delta a_{cl_i}^{t,f} \qquad (21)$$
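The corresponding update step, sketched below under the same assumptions as the forward pass shown earlier (our illustration; the ζ and γ values are placeholders), implements Equations (13)–(14) and (18)–(21):

```python
import numpy as np

def client_backprop(s, w, x, beta, A, B, zeta=1.0, gamma=0.1):
    """One gradient step on a single sample for the i-th client."""
    lam = (beta - x) * x * (1.0 - x)       # lambda, Equation (14)
    dB = zeta * np.outer(w, lam)           # Delta b, Equation (13)
    alpha = (B @ lam) * w * (1.0 - w)      # alpha, Equation (19)
    dA = zeta * np.outer(s, alpha)         # Delta a, Equation (18)
    return A + gamma * dA, B + gamma * dB  # updates, Equations (20)-(21)
```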

2.1. Proposed ith Client Machine Learning Algorithm

Table 1 shows the pseudocode of the proposed machine learning algorithm, which executes at the $i$th client.

2.2. Transfer of Weights

These weights are then transferred to the cloud or federated server. To secure the system, the weights can be encrypted before transmission. In this study, the weights are not encrypted; encryption is left as an optional component that can be added according to application requirements.

2.3. Federated Server

Each client transmits its optimum weights ($A_{IH}^{cl_i}$, $B_{HO}^{cl_i}$) to the federated server. In our case, each client is trained with one of the following ANN techniques: (1) Levenberg–Marquardt (LM), (2) Bayesian Regularization (BR) or (3) Scaled Conjugate Gradient (SCG). The optimized weights of the LM, BR and SCG algorithms are given in Equations (22)–(24), respectively.
$$A_{IH}^{cl_1}(LM) = \begin{pmatrix} a_{11}^{1} & \cdots & a_{1c_n}^{1} \\ \vdots & \ddots & \vdots \\ a_{r_m 1}^{1} & \cdots & a_{r_m c_n}^{1} \end{pmatrix}_{d_1 \times d_2} \qquad (22)$$
$$A_{IH}^{cl_2}(BR) = \begin{pmatrix} a_{11}^{2} & \cdots & a_{1c_n}^{2} \\ \vdots & \ddots & \vdots \\ a_{r_m 1}^{2} & \cdots & a_{r_m c_n}^{2} \end{pmatrix}_{d_3 \times d_4} \qquad (23)$$
$$A_{IH}^{cl_3}(SCG) = \begin{pmatrix} a_{11}^{3} & \cdots & a_{1c_n}^{3} \\ \vdots & \ddots & \vdots \\ a_{r_m 1}^{3} & \cdots & a_{r_m c_n}^{3} \end{pmatrix}_{d_5 \times d_6} \qquad (24)$$
The combined optimal weights on the federated server for the input-to-hidden layer can be stated using Equation (25), in which $A_{IH}^{n}(FS)$ represents the aggregated weights of all locally trained clients.
$$A_{IH}^{n}(FS) = A_{IH}^{cl_1}(LM) + A_{IH}^{cl_2}(BR) + A_{IH}^{cl_3}(SCG) \qquad (25)$$
This aggregation faces an issue with the addition property of matrices: matrices can only be added if their dimensions are consistent. It is clear from Equations (22)–(24) that the locally trained matrices cannot be added directly, since they do not have the same dimensions. To cope with this issue, the dimensions of all the concerned matrices must be made the same; for this, we concatenate a zero matrix with each matrix where required.
Using Equation (26), we find the maximum number of rows among all locally trained clients:
$$Max_{r\text{-}IH} = \max(d_1,\, d_3,\, d_5) \qquad (26)$$
Similarly, we find the maximum number of columns among all locally trained clients using Equation (27):
$$Max_{c\text{-}IH} = \max(d_2,\, d_4,\, d_6) \qquad (27)$$
To embed the zero matrix with each optimum weight matrix, the following procedure is used. In Equations (28)–(30), $ZM_{IH\text{-}LM}$, $ZM_{IH\text{-}BR}$ and $ZM_{IH\text{-}SCG}$ represent the zero matrices for the LM, BR and SCG algorithms, respectively. These zero matrices are horizontally concatenated with each locally trained model's weights.
$$ZM_{IH\text{-}LM} = \mathrm{zeros}(Max_{r\text{-}IH},\; Max_{c\text{-}IH} - d_2) \qquad (28)$$
$$ZM_{IH\text{-}BR} = \mathrm{zeros}(Max_{r\text{-}IH},\; Max_{c\text{-}IH} - d_4) \qquad (29)$$
$$ZM_{IH\text{-}SCG} = \mathrm{zeros}(Max_{r\text{-}IH},\; Max_{c\text{-}IH} - d_6) \qquad (30)$$
The horizontal concatenation is given in Equations (31)–(33).
$$A_{IH\text{-}LM} = \mathrm{horzcat}(ZM_{IH\text{-}LM},\, a_{IH}(LM)) \qquad (31)$$
$$A_{IH\text{-}BR} = \mathrm{horzcat}(ZM_{IH\text{-}BR},\, a_{IH}(BR)) \qquad (32)$$
$$A_{IH\text{-}SCG} = \mathrm{horzcat}(ZM_{IH\text{-}SCG},\, a_{IH}(SCG)) \qquad (33)$$
In Equations (31)–(33), $A_{IH\text{-}LM}$, $A_{IH\text{-}BR}$ and $A_{IH\text{-}SCG}$ have the same dimensions; thus, these matrices can now be aggregated. To obtain the federated server (global) model, we use Equation (34).
$$A_{IH\text{-}FS} = 2\,A_{IH\text{-}LM} + A_{IH\text{-}BR} + 0.5\,A_{IH\text{-}SCG} \qquad (34)$$
In Equation (34), $A_{IH\text{-}FS}$ represents the optimum federated weights between the input layer and the hidden layer. The locally trained clients are given different scaling factors based on their performance.
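A NumPy sketch of this padding-and-fusion step (our illustration; the row padding is our assumption, since Equations (28)–(33) only pad columns) could look as follows:

```python
import numpy as np

def pad_and_fuse(mats, scales):
    """Fuse client weight matrices of unequal size, per Equations (26)-(34):
    zero-pad every matrix up to the maximum row/column count, then sum with
    performance-based scaling factors."""
    max_r = max(m.shape[0] for m in mats)  # Equation (26)
    max_c = max(m.shape[1] for m in mats)  # Equation (27)
    fused = np.zeros((max_r, max_c))
    for m, scale in zip(mats, scales):
        padded = np.zeros((max_r, max_c))             # zero matrix, Eqs. (28)-(30)
        padded[:m.shape[0], max_c - m.shape[1]:] = m  # horzcat, Eqs. (31)-(33)
        fused += scale * padded                       # weighted sum, Equation (34)
    return fused

# LM, BR and SCG input-to-hidden weights with scaling factors 2, 1 and 0.5.
A_lm, A_br, A_scg = np.ones((3, 4)), np.ones((3, 3)), np.ones((2, 4))
A_fs = pad_and_fuse([A_lm, A_br, A_scg], scales=[2.0, 1.0, 0.5])
```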

2.4. Optimal Weights of Hidden Output Layer

As for the input-to-hidden layer, the optimal hidden-to-output layer weights for the LM, BR and SCG algorithms can be stated using Equations (35)–(37).
$$B_{HO}^{cl_1}(LM) = \begin{pmatrix} b_{11}^{1} & \cdots & b_{1c_n}^{1} \\ \vdots & \ddots & \vdots \\ b_{r_m 1}^{1} & \cdots & b_{r_m c_n}^{1} \end{pmatrix}_{d_7 \times d_8} \qquad (35)$$
$$B_{HO}^{cl_2}(BR) = \begin{pmatrix} b_{11}^{2} & \cdots & b_{1c_n}^{2} \\ \vdots & \ddots & \vdots \\ b_{r_m 1}^{2} & \cdots & b_{r_m c_n}^{2} \end{pmatrix}_{d_9 \times d_{10}} \qquad (36)$$
$$B_{HO}^{cl_3}(SCG) = \begin{pmatrix} b_{11}^{3} & \cdots & b_{1c_n}^{3} \\ \vdots & \ddots & \vdots \\ b_{r_m 1}^{3} & \cdots & b_{r_m c_n}^{3} \end{pmatrix}_{d_{11} \times d_{12}} \qquad (37)$$
$$B_{HO}^{cl_i}(FS) = B_{HO}^{cl_1}(LM) + B_{HO}^{cl_2}(BR) + B_{HO}^{cl_3}(SCG) \qquad (38)$$
The federated weights can be obtained using Equation (38), but this fusion also faces the same issue of dimension inconsistency. The same procedure will be applied to all client weight matrices to make their dimensions consistent.
$$Max_{r\text{-}HO} = \max(d_7,\, d_9,\, d_{11}) \qquad (39)$$
$$Max_{c\text{-}HO} = \max(d_8,\, d_{10},\, d_{12}) \qquad (40)$$
$$ZM_{HO\text{-}LM} = \mathrm{zeros}(Max_{r\text{-}HO},\; Max_{c\text{-}HO} - d_8) \qquad (41)$$
$$ZM_{HO\text{-}BR} = \mathrm{zeros}(Max_{r\text{-}HO},\; Max_{c\text{-}HO} - d_{10}) \qquad (42)$$
$$ZM_{HO\text{-}SCG} = \mathrm{zeros}(Max_{r\text{-}HO},\; Max_{c\text{-}HO} - d_{12}) \qquad (43)$$
$$B_{HO\text{-}LM} = \mathrm{horzcat}(ZM_{HO\text{-}LM},\, b_{HO}(LM)) \qquad (44)$$
$$B_{HO\text{-}BR} = \mathrm{horzcat}(ZM_{HO\text{-}BR},\, b_{HO}(BR)) \qquad (45)$$
$$B_{HO\text{-}SCG} = \mathrm{horzcat}(ZM_{HO\text{-}SCG},\, b_{HO}(SCG)) \qquad (46)$$
$$B_{HO\text{-}FS} = 2\,B_{HO\text{-}LM} + B_{HO\text{-}BR} + 0.5\,B_{HO\text{-}SCG} \qquad (47)$$
In Equation (47), $B_{HO\text{-}FS}$ represents the optimum federated weights between the hidden layer and the output layer. Again, the locally trained clients are given different scaling factors based on their performance.

2.5. Proposed Weighted Federated Machine Learning Algorithm Pseudo Code

Table 2 shows the pseudocode of the proposed weighted federated machine learning algorithm, which executes on the server side.

2.6. Edge Device

These global model weights are conveyed to a local network or edge devices. The edge devices can then use the global model to predict hydrogen storage locally.
Finally, in the validation phase, the proposed HSPS-WFML imports the stored model from the cloud to predict hydrogen storage. The proposed HSPS-WFML model classifies hydrogen storage as low class (up to 1 wt %), medium class (up to 2 wt %) or high class (above 2 wt %), depending on the hydrogen storage capacity. Low-class hydrogen storage is utilized in the residential sector, whereas medium- and high-class hydrogen storage are useful in industrial sectors and automobiles, respectively. The proposed HSPS-WFML model thus helps researchers indicate the optimal reaction conditions.
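A trivial sketch of this class mapping (our illustration; only the wt % boundaries come from the description above):

```python
def storage_class(wt_percent):
    """Map a predicted hydrogen storage capacity (wt %) to a class."""
    if wt_percent <= 1.0:
        return "low"     # residential use
    if wt_percent <= 2.0:
        return "medium"  # industrial use
    return "high"        # automotive use
```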

3. Simulations and Results

The proposed federated learning-based model was simulated using MATLAB. The total number of dataset instances was 151,388, adapted from a previous study [11]. The dataset was randomly divided into training (70% of the samples, 105,971) and validation (30% of the samples, 45,417) sets. Various statistical parameters, such as accuracy, misclassification rate (MCR), selectivity, recall, precision, false omission rate (FOR), false discovery rate (FDR), F0.5 score and F1 score, defined in Equations (48)–(56), were considered for investigating the performance of the proposed HSPS-WFML model [43,44,45].
$$\text{Accuracy} = \frac{\left( OS_i \mid IS_i \right) + \left( OS_k \mid IS_k \right)}{\left( OS_i \mid IS_i \right) + \sum_{j=1}^{n} \left( OS_{j,\, j \neq i} \mid IS_j \right) + \left( OS_k \mid IS_k \right) + \sum_{l=1}^{n} \left( OS_{l,\, l \neq k} \mid IS_k \right)}, \quad \text{where } i/j/k/l = 1, 2, 3, \ldots, n \qquad (48)$$
$$\text{Miss rate} = \frac{\sum_{l=1}^{n} \left( OS_{l,\, l \neq k} \mid IS_k \right)}{\sum_{l=1}^{n} \left( OS_{l,\, l \neq k} \mid IS_k \right) + \left( OS_i \mid IS_i \right)}, \quad \text{where } i/k/l = 1, 2, 3, \ldots, n \qquad (49)$$
$$\text{True Positive Rate / Recall} = \frac{OS_i \mid IS_i}{\left( OS_i \mid IS_i \right) + \sum_{l=1}^{n} \left( OS_{l,\, l \neq k} \mid IS_k \right)}, \quad \text{where } i/k/l = 1, 2, 3, \ldots, n \qquad (50)$$
$$\text{True Negative Rate / Selectivity} = \frac{OS_k \mid IS_k}{\left( OS_k \mid IS_k \right) + \sum_{j=1}^{n} \left( OS_{j,\, j \neq i} \mid IS_j \right)}, \quad \text{where } j/k = 1, 2, 3, \ldots, n \qquad (51)$$
$$\text{Precision} = \frac{OS_i \mid IS_i}{\left( OS_i \mid IS_i \right) + \sum_{j=1}^{n} \left( OS_{j,\, j \neq i} \mid IS_j \right)}, \quad \text{where } i/j = 1, 2, 3, \ldots, n \qquad (52)$$
$$\text{False Omission Rate} = \frac{\sum_{l=1}^{n} \left( OS_{l,\, l \neq k} \mid IS_k \right)}{\sum_{l=1}^{n} \left( OS_{l,\, l \neq k} \mid IS_k \right) + \left( OS_k \mid IS_k \right)}, \quad \text{where } k/l = 1, 2, 3, \ldots, n \qquad (53)$$
$$\text{False Discovery Rate} = \frac{\sum_{j=1}^{n} \left( OS_{j,\, j \neq i} \mid IS_j \right)}{\left( OS_i \mid IS_i \right) + \sum_{j=1}^{n} \left( OS_{j,\, j \neq i} \mid IS_j \right)}, \quad \text{where } i/j = 1, 2, 3, \ldots, n \qquad (54)$$
$$F_{0.5}\ \text{Score} = \frac{1.25 \times \text{Precision} \times \text{Recall}}{0.25 \times \text{Precision} + \text{Recall}} \qquad (55)$$
$$F_1\ \text{Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (56)$$
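In the standard confusion-matrix view, these parameters can be computed per class as in the sketch below (our own illustration mirroring the definitions above, not the authors' code; the counts are made up):

```python
import numpy as np

def per_class_metrics(cm, c):
    """Statistical parameters for class c from a confusion matrix cm
    (rows = true class, columns = predicted class)."""
    tp = cm[c, c]
    fn = cm[c, :].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                # Equation (50)
    return {
        "accuracy": (tp + tn) / cm.sum(),  # Equation (48)
        "miss_rate": fn / (fn + tp),       # Equation (49)
        "selectivity": tn / (tn + fp),     # Equation (51)
        "precision": precision,            # Equation (52)
        "FOR": fn / (fn + tn),             # Equation (53)
        "FDR": fp / (fp + tp),             # Equation (54)
        "F0.5": 1.25 * precision * recall / (0.25 * precision + recall),
        "F1": 2 * precision * recall / (precision + recall),
    }

cm = np.array([[90, 5, 5], [1, 98, 1], [4, 0, 96]])  # illustrative counts
print(per_class_metrics(cm, c=1))  # parameters for the medium class
```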

4. Discussion

Performance Analysis of The Weighted Federated Learning

The performance of the proposed HSPS-WFML model was investigated using various statistical parameters, and the results are presented in Table 3. The performance was evaluated for all three classes, i.e., low, medium and high. The proposed model yielded an accuracy of 99.90% for the medium class, compared with 96.50% and 96.40% for the low and high classes. Moreover, the MCR was slightly higher for the high class (3.60%) than for the low class (3.50%), and considerably lower for the medium class (0.10%). Furthermore, the sensitivity was 99.70% and 96.00% for the medium and high classes, respectively, whereas it was slightly lower (92.55%) for the low class. The selectivity was 98.20%, 99.99% and 96.60% for the low, medium and high classes, respectively, i.e., similar across the three classes. The proposed model was most precise for the medium class, with a precision of 99.99%, and least precise for the high class, with a precision of 92.90%. The FOR was calculated to be 3.15%, 0.16% and 1.90% for the low, medium and high classes, respectively. The FDR was 7.10% for the high class, slightly higher than the 4.40% for the low class, and comparatively low (0.001%) for the medium class. The proposed model obtained F0.5 scores of 95.00%, 99.95% and 93.50% and F1 scores of 94.10%, 99.85% and 94.40% for the low, medium and high classes, respectively. These results elucidate that the performance of the proposed HSPS-WFML model was higher for the medium class than for the other two classes.
Figure 3 depicts the class level accuracy, miss rate, sensitivity, selectivity and precision of the proposed model.
Moreover, the overall accuracy of the proposed model was 96.40% and the MCR was 3.60%, as depicted in Figure 4. In future work, the overall accuracy can be improved further by increasing the accuracies of the low and high classes. Different machine learning algorithms, such as the support vector machine (SVM), the deep extreme learning machine and particle swarm optimization, can be applied on the server side, which may help further improve the accuracy of the low and high classes.
Table 4 presents the comparison of the current work with the existing studies. It is observed from Table 4 that the accuracy of our proposed HSPS-WFML model is in agreement with the previous studies.

5. Conclusions

A hydrogen storage system using DBT as an LOHC is a promising technique, but the experimental investigation of optimal reaction conditions consumes significant effort and time. Hydrogen storage prediction coupled with federated learning can play a vital role in indicating the optimal reaction conditions. We proposed the HSPS-WFML model to predict the hydrogen storage capacity in dibenzyltoluene. The accuracy of the proposed model for the low, medium and high classes is 96.50%, 99.90% and 96.40%, respectively. The overall accuracy of the hydrogen storage prediction is 96.40%, and the misclassification rate is 3.60%. These results elucidate that the proposed HSPS-WFML model predicts hydrogen storage efficiently. Hence, the proposed HSPS-WFML model can be regarded as an accurate model for hydrogen storage prediction.

Author Contributions

Conceptualization, methodology, A.A.; software, validation, M.A.K.; data curation, A.A.; writing—original draft, investigation, formal analysis, A.A. and M.A.K.; writing—review and editing, supervision, funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Gachon University research fund of 2017 (GCU-2017-0204).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Preuster, P.; Alekseev, A.; Wasserscheid, P. Hydrogen storage technologies for future energy systems. Annu. Rev. Chem. Biomol. Eng. 2017, 8, 445–471.
  2. Ali, A.; Rohini, A.K.; Noh, Y.S.; Moon, D.J.; Lee, H.J. Hydrogenation of dibenzyltoluene and the catalytic performance of Pt/Al2O3 with various Pt loadings for hydrogen production from perhydro-dibenzyltoluene. Int. J. Energy Res. 2022, 46, 6672–6688.
  3. Ali, A.; Rohini, A.K.; Lee, H.J. Dehydrogenation of perhydro-dibenzyltoluene for hydrogen production in a microchannel reactor. Int. J. Hydrogen Energy 2022, 47, 20905–20914.
  4. Niermann, M.; Drünert, S.; Kaltschmitt, M.; Bonhoff, K. Liquid organic hydrogen carriers (LOHCs)–techno-economic analysis of LOHCs in a defined process chain. Energy Environ. Sci. 2019, 12, 290–307.
  5. Müller, K. Technologies for the Storage of Hydrogen Part 1: Hydrogen Storage in the Narrower Sense. ChemBioEng Rev. 2019, 6, 72–80.
  6. Jang, M.; Jo, Y.S.; Lee, W.J.; Shin, B.S.; Sohn, H.; Jeong, H.; Jang, S.C.; Kwak, S.K.; Kang, J.W.; Yoon, C.W. A high-capacity, reversible liquid organic hydrogen carrier: H2-release properties and an application to a fuel cell. ACS Sustain. Chem. Eng. 2018, 7, 1185–1194.
  7. Brückner, N.; Obesser, K.; Bösmann, A.; Teichmann, D.; Arlt, W.; Dungs, J.; Wasserscheid, P. Evaluation of industrially applied heat-transfer fluids as liquid organic hydrogen carrier systems. ChemSusChem 2014, 7, 229–235.
  8. Geburtig, D.; Preuster, P.; Bösmann, A.; Müller, K.; Wasserscheid, P. Chemical utilization of hydrogen from fluctuating energy sources—Catalytic transfer hydrogenation from charged Liquid Organic Hydrogen Carrier systems. Int. J. Hydrogen Energy 2016, 41, 1010–1017.
  9. Geburtig, D. Transfer Hydrogenation Using Liquid Organic Hydrogen Carrier Systems as Hydrogen Source. Doctoral Dissertation, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany, 2019.
  10. Dürr, S.; Zilm, S.; Geißelbrecht, M.; Müller, K.; Preuster, P.; Bösmann, A.; Wasserscheid, P. Experimental determination of the hydrogenation/dehydrogenation-Equilibrium of the LOHC system H0/H18-dibenzyltoluene. Int. J. Hydrogen Energy 2021, 46, 32583–32594.
  11. Feng, X.; Jiang, L.; Li, Z.; Wang, S.; Ye, J.; Wu, Y.; Yuan, B. Boosting the hydrogenation activity of dibenzyltoluene catalyzed by Mg-based metal hydrides. Int. J. Hydrogen Energy 2022, 47, 23994–24003.
  12. Shi, L.; Qi, S.; Qu, J.; Che, T.; Yi, C.; Yang, B. Integration of hydrogenation and dehydrogenation based on dibenzyltoluene as liquid organic hydrogen energy carrier. Int. J. Hydrogen Energy 2019, 44, 5345–5354.
  13. Sparks, T.D.; Gaultois, M.W.; Oliynyk, A.; Brgoch, J.; Meredig, B. Data mining our way to the next generation of thermoelectrics. Scr. Mater. 2016, 111, 10–15.
  14. Yan, J.; Gorai, P.; Ortiz, B.; Miller, S.; Barnett, S.A.; Mason, T.; Stevanović, V.; Toberer, E.S. Material descriptors for predicting thermoelectric performance. Energy Environ. Sci. 2015, 8, 983–994.
  15. Seshadri, R.; Sparks, T.D. Perspective: Interactive material property databases through aggregation of literature data. APL Mater. 2016, 4, 053206.
  16. Oliynyk, A.O.; Antono, E.; Sparks, T.D.; Ghadbeigi, L.; Gaultois, M.W.; Meredig, B.; Mar, A. High-throughput machine-learning-driven synthesis of full-Heusler compounds. Chem. Mater. 2016, 28, 7324–7331.
  17. Pilania, G.; Balachandran, P.V.; Gubernatis, J.E.; Lookman, T. Classification of ABO3 perovskite solids: A machine learning study. Acta Crystallogr. Sect. B Struct. Sci. Cryst. Eng. Mater. 2015, 71, 507–513.
  18. Pilania, G.; Balachandran, P.V.; Kim, C.; Lookman, T. Finding new perovskite halides via machine learning. Front. Mater. 2016, 3, 19.
  19. Balachandran, P.V.; Broderick, S.R.; Rajan, K. Identifying the ‘inorganic gene’ for high-temperature piezoelectric perovskites through statistical learning. Proc. R. Soc. A Math. Phys. Eng. Sci. 2011, 467, 2271–2290.
  20. Pilania, G.; Mannodi-Kanakkithodi, A.; Uberuaga, B.P.; Ramprasad, R.; Gubernatis, J.E.; Lookman, T. Machine learning bandgaps of double perovskites. Sci. Rep. 2016, 6, 1–10.
  21. Wilmer, C.E.; Leaf, M.; Lee, C.Y.; Farha, O.K.; Hauser, B.G.; Hupp, J.T.; Snurr, R.Q. Large-scale screening of hypothetical metal–organic frameworks. Nat. Chem. 2012, 4, 83–89.
  22. Lin, L.C.; Berger, A.H.; Martin, R.L.; Kim, J.; Swisher, J.A.; Jariwala, K.; Rycroft, C.H.; Bhown, A.S.; Deem, M.W.; Haranczyk, M.; et al. In silico screening of carbon-capture materials. Nat. Mater. 2012, 11, 633–641.
  23. Greeley, J.; Jaramillo, T.F.; Bonde, J.; Chorkendorff, I.B.; Nørskov, J.K. Computational high-throughput screening of electrocatalytic materials for hydrogen evolution. Nat. Mater. 2006, 5, 909–913.
  24. Hong, W.T.; Welsch, R.E.; Shao-Horn, Y. Descriptors of oxygen-evolution activity for oxides: A statistical evaluation. J. Phys. Chem. C 2016, 120, 78–86.
  25. Kim, E.; Huang, K.; Tomala, A.; Matthews, S.; Strubell, E.; Saunders, A.; McCallum, A.; Olivetti, E. Machine-learned and codified synthesis parameters of oxide materials. Sci. Data 2017, 4, 1–9.
  26. Sumpter, B.G.; Vasudevan, R.K.; Potok, T.; Kalinin, S.V. A bridge for accelerating materials by design. NPJ Comput. Mater. 2015, 1, 1–11.
  27. Kalinin, S.V.; Sumpter, B.G.; Archibald, R.K. Big–deep–smart data in imaging for guiding materials design. Nat. Mater. 2015, 14, 973–980.
  28. Kim, E.; Huang, K.; Jegelka, S.; Olivetti, E. Virtual screening of inorganic materials synthesis parameters with deep learning. NPJ Comput. Mater. 2017, 3, 1–9.
  29. Rahnama, A.; Clark, S.; Sridhar, S. Machine learning for predicting occurrence of interphase precipitation in HSLA steels. Comput. Mater. Sci. 2018, 154, 169–177.
  30. Gómez-Bombarelli, R.; Aguilera-Iparraguirre, J.; Hirzel, T.D.; Duvenaud, D.; Maclaurin, D.; Blood-Forsythe, M.A.; Chae, H.S.; Einzinger, M.; Ha, D.G.; Wu, T.; et al. Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach. Nat. Mater. 2016, 15, 1120–1127.
  31. Dashti, A.; Harami, H.R.; Rezakazemi, M. Accurate prediction of solubility of gases within H2-selective nanocomposite membranes using committee machine intelligent system. Int. J. Hydrogen Energy 2018, 43, 6614–6624.
  32. Rezakazemi, M.; Dashti, A.; Asghari, M.; Shirazian, S. H2-selective mixed matrix membranes modeling using ANFIS, PSO-ANFIS, GA-ANFIS. Int. J. Hydrogen Energy 2017, 42, 15211–15225.
  33. Rezakazemi, M.; Azarafza, A.; Dashti, A.; Shirazian, S. Development of hybrid models for prediction of gas permeation through FS/POSS/PDMS nanocomposite membranes. Int. J. Hydrogen Energy 2018, 43, 17283–17294.
  34. Rahnama, A.; Zepon, G.; Sridhar, S. Machine learning based prediction of metal hydrides for hydrogen storage, part I: Prediction of hydrogen weight percent. Int. J. Hydrogen Energy 2019, 44, 7337–7344.
  35. Rahnama, A.; Zepon, G.; Sridhar, S. Machine learning based prediction of metal hydrides for hydrogen storage, part II: Prediction of material class. Int. J. Hydrogen Energy 2019, 44, 7345–7353.
  36. Ahmed, A.; Seth, S.; Purewal, J.; Wong-Foy, A.G.; Veenstra, M.; Matzger, A.J.; Siegel, D.J. Exceptional hydrogen storage achieved by screening nearly half a million metal-organic frameworks. Nat. Commun. 2019, 10, 1–9.
  37. Abad, G.; Picek, S.; Urbieta, A. SoK: On the Security & Privacy in Federated Learning. arXiv 2021, arXiv:2112.05423.
  38. Federated Learning: Predictive Model Without Data Sharing–Sparkd AI. Available online: https://sparkd.ai/federated-learning (accessed on 28 August 2022).
  39. Oueida, S.; Kotb, Y.; Aloqaily, M.; Jararweh, Y.; Baker, T. An edge computing based smart healthcare framework for resource management. Sensors 2018, 18, 4307.
  40. Ata, A.; Khan, M.A.; Abbas, S.; Ahmad, G.; Fatima, A. Modelling smart road traffic congestion control system using machine learning techniques. Neural Netw. World 2018, 29, 99–110.
  41. Rehman, A.; Athar, A.; Khan, M.A.; Abbas, S.; Fatima, A.; Saeed, A. Modelling, simulation, and optimization of diabetes type II prediction using deep extreme learning machine. J. Ambient. Intell. Smart Environ. 2020, 12, 125–138.
  42. Khan, A.H.; Khan, M.A.; Abbas, S.; Siddiqui, S.Y.; Saeed, M.A.; Alfayad, M.; Elmitwally, N.S. Simulation, modeling, and optimization of intelligent kidney disease predication empowered with computational intelligence approaches. CMC-Comput. Mater. Continua 2021, 67, 1399–1412.
  43. Khan, M.A.; Abbas, S.; Rehman, A.; Saeed, Y.; Zeb, A.; Uddin, M.I.; Nasser, N.; Ali, A. A machine learning approach for blockchain-based smart home networks security. IEEE Netw. 2020, 35, 223–229.
  44. Khan, M.A.; Abbas, S.; Atta, A.; Ditta, A.; Alquhayz, H.; Khan, M.F.; Naqvi, R.A. Intelligent cloud based heart disease prediction system empowered with supervised machine learning. CMC-Comput. Mater. Continua 2020, 65, 139–151.
  45. Mehmood, S.; Ghazal, T.M.; Khan, M.A.; Zubair, M.; Naseem, M.T.; Faiz, T.; Ahmad, M. Malignancy Detection in Lung and Colon Histopathology Images Using Transfer Learning With Class Selective Image Processing. IEEE Access 2022, 10, 25657–25668.
  46. Thornton, A.W.; Simon, C.M.; Kim, J.; Kwon, O.; Deeg, K.S.; Konstas, K.; Pas, S.J.; Hill, M.R.; Winkler, D.A.; Haranczyk, M.; et al. Materials genome in action: Identifying the performance limits of physical hydrogen storage. Chem. Mater. 2017, 29, 2844–2854.
  47. Bucior, B.J.; Bobbitt, N.S.; Islamoglu, T.; Goswami, S.; Gopalan, A.; Yildirim, T.; Farha, O.K.; Bagheri, N.; Snurr, R.Q. Energy-based descriptors to rapidly predict hydrogen storage in metal–organic frameworks. Mol. Syst. Des. Eng. 2019, 4, 162–174.
  48. Hastie, T.; Tibshirani, R.; Friedman, J.H.; Friedman, J.H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: New York, NY, USA, 2009; pp. 1–758.
Figure 1. Flow chart of the federated learning model.
Figure 2. Hydrogen Storage Prediction System empowered with Weighted Federated Machine Learning (HSPS-WFML).
Figure 3. Statistical parameters for low, medium and high classes.
Figure 4. The overall accuracy and MCR for hydrogen storage prediction using the proposed HSPS-WFML model.
Table 1. Proposed ith client machine learning pseudocode.

Client Training Algorithm $(d, A_{IH}^{cl_i}, B_{HO}^{cl_i})$
1. Start
2. Split the local data into small groups of size Cs
3. Initialize the weights of both layers, i.e., the input-to-hidden and hidden-to-output weights $(A_{IH}^{cl_i}, B_{HO}^{cl_i})$, and set $F_{cl_i} = 0$ and the number of epochs $d = 0$
4. For every small group (Cs):
            i. Apply the feedforward phase to:
                              a. Calculate $w_{cl_i}^{f}$ using Equation (4)
                              b. Calculate the estimated output $x_{cl_i}^{n}$ using Equation (5)
            ii. Calculate the error value $F_{cl_i}$ using Equation (6)
            iii. Weight updating phase:
                              a. Calculate $\Delta b_{cl_i}^{f,n}$ using Equation (13)
                              b. Calculate $\Delta a_{cl_i}^{t,f}$ using Equation (18)
                              c. Update the weights between the hidden and output layers, $b_{cl_i}^{f,n}(d+1)$, using Equation (20)
                              d. Update the weights between the input and hidden layers, $a_{cl_i}^{t,f}(d+1)$, using Equation (21)
            If the stopping criteria are not met, go to step 4; otherwise, go to step 5
5. Return the optimum weights $(A_{IH}^{cl_i}, B_{HO}^{cl_i})$ to the federated server
6. Stop
Table 2. Proposed weighted federated machine learning algorithm pseudocode.

1. Start
2. Initialize the weights $(A_{IH\text{-}FS}, B_{HO\text{-}FS})$
3. For each cycle do:
                        For each client do:
                                        $[A_{IH}^{cl_i}, B_{HO}^{cl_i}] = \text{Client}(d, A_{IH}^{cl_i}, B_{HO}^{cl_i})$
                        End
    End
4. Calculate $B_{HO\text{-}FS}$ using Equation (47)
5. Calculate $A_{IH\text{-}FS}$ using Equation (34)
6. Predict unknown data samples:
    a. For i = 1 to the number of samples:
            i. Calculate $w_{f}^{FS} = \frac{1}{1 + e^{-\left(c_1 + \sum_{t=1}^{k} a_{FS}^{t,f}\, s_t\right)}}$, where $1 \le f \le k$
            ii. Calculate $x_{n}^{FS} = \frac{1}{1 + e^{-\left(c_2 + \sum_{f=1}^{k} b_{FS}^{f,n}\, w_{f}^{FS}\right)}}$, where $1 \le n \le g$
            iii. Calculate the error $F_{FS} = \frac{1}{2} \sum_{n=1}^{g} \left( \beta_{n}^{FS} - x_{n}^{FS} \right)^{2}$
7. Stop
Table 3. Comparison of statistical parameters for the three classes.

Parameter | Low Class | Medium Class | High Class
Accuracy | 96.50% | 99.90% | 96.40%
Misclassification Rate | 3.50% | 0.10% | 3.60%
Recall/Sensitivity | 92.55% | 99.70% | 96.00%
Selectivity | 98.20% | 99.99% | 96.60%
Precision | 95.60% | 99.99% | 92.90%
False Omission Rate | 3.15% | 0.16% | 1.90%
False Discovery Rate | 4.40% | 0.001% | 7.10%
F0.5 Score | 95.00% | 99.95% | 93.50%
F1 Score | 94.10% | 99.85% | 94.40%
Table 4. Comparison of the current study with previous studies.

Studies | Year | Storage System | Model | Accuracy
Thornton et al. [46] | 2017 | Nanoporous materials | Neural network | 88.00%
Rahnama et al. [34] | 2019 | Metal hydrides | Boosted decision tree regression | 83.00%
Rahnama et al. [35] | 2019 | Metal hydrides | Multi-class neural network | 80.00%
Bucior et al. [47] | 2019 | Metal–organic frameworks | Multi-linear regression with LASSO [48] | 96.00%
Ali et al. | Current work | LOHC | HSPS-WFML | 96.40%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

