Article

Exposing Deep Representations to a Recurrent Expansion with Multiple Repeats for Fuel Cells Time Series Prognosis

1 Laboratory of Automation and Manufacturing Engineering, University of Batna 2, Batna 05000, Algeria
2 Institut de Recherche Dupuy de Lôme (UMR CNRS 6027), University of Brest, 29238 Brest, France
3 Logistics Engineering College, Shanghai Maritime University, Shanghai 201306, China
4 ISEN Yncréa Ouest, L@bISEN, 29200 Brest, France
* Author to whom correspondence should be addressed.
Entropy 2022, 24(7), 1009; https://doi.org/10.3390/e24071009
Submission received: 7 June 2022 / Revised: 3 July 2022 / Accepted: 20 July 2022 / Published: 21 July 2022

Abstract

The green energy conversion of proton exchange membrane fuel cells (PEMFCs) has received particular attention in both stationary and transportation applications. However, the poor durability of PEMFCs represents a major problem that hampers their commercial application, since dynamic operating conditions, including physical deterioration, have a serious impact on cell performance. Under these circumstances, prognostics and health management (PHM) plays an important role in prolonging durability and preventing damage propagation via the accurate planning of a condition-based maintenance (CBM) schedule. Within this topic, deep learning (DL) is the most widely studied representation learning tool for health deterioration modeling, owing to its ability to adapt to rapid changes in data complexity and drift. In this context, the present paper investigates even deeper representations by exposing DL models themselves to a recurrent expansion with multiple repeats. Such a recurrent expansion of DL (REDL) allows new, more meaningful representations to be explored by repeatedly using generated feature maps and responses to create new, robust models. The proposed REDL, designed as an adaptive learning algorithm, is tested on a PEMFC deterioration dataset and compared to its deep learning baseline under time series analysis. Using multiple numeric and visual metrics, the results support the REDL learning scheme by showing promising performance.

1. Introduction

Recent prospective studies have revealed that, over the next 30 years, the expected growth of the world's population will increase energy demand by 50% [1]. As a result, and among the different types of fuel cells, PEMFCs are considered the most feasible green energy converters, suitable for both stationary and transportation applications [2]. The PEMFC shown in the schematic diagram of Figure 1a converts the chemical energy of hydrogen fuel into electricity. It consists of an anode, a cathode, and an electrolyte. The electrolyte membrane conducts protons from the anode to the cathode while forcing electrons through an external circuit [3]. Generally, a single PEMFC can be considered a low-voltage, high-current electricity generator, ranging from 0.4 V to 0.9 V and from 0.5 A/cm2 to 1 A/cm2, respectively. Therefore, as shown in Figure 1b, several fuel cells must be stacked to satisfy a specific type of application [3]. PEMFCs are highly sensitive to dynamic and non-stationary external and internal operating conditions, which can reduce their lifespan [4]. In this context, the component materials (Figure 1), gas contamination, and the operating conditions imposed by the load profile determine the system's durability. Accordingly, a well-structured prognosis policy is needed to extend the lifetime by incorporating the required CBM tasks [5].
Indeed, the remaining useful life (RUL) prediction, i.e., the estimation of the time between the current moment and the end of life of a system, is a key element for assessing damage propagation and aging [6,7]. Two particular types of prognosis can be triggered in this case: short-term and long-term prognosis [4,8]. Short-term prognosis is designed to capture local variations with high accuracy, while long-term prognosis aims to capture the deterioration trend [8]. Logically, the prediction of local variations will be more accurate than that of the large variations along the trend of the life path. For PEMFCs, prognosis can be modeled through three main strategies, namely model-based, filter-based, and data-based approaches [9]. Among data-based approaches, DL tools have been widely investigated. According to [5], the choice of DL tools is motivated by the need to provide a meaningful representation of data patterns originally located in a very complex feature space (see Figure 2 from [5]). In the context of PEMFC prognosis, the entire deterioration path can be obtained under real experimental conditions, so the data are "complete" in this case. Additionally, PEMFCs are commonly subject to a high level of dynamic disturbances caused by changing operating conditions, leading to more "complex" and "drifted" data. If we project these three characteristics (i.e., data that are complete, complex, and drifted) onto the flowchart proposed in the systematic guide of [5] (see Figure 2 of [5]), there is no better solution than DL for such a case, and conventional machine learning is no longer recommended. Thus, constructing an RUL predictive model requires meeting three main criteria: data availability, complexity, and drift. In this context, to remedy learning issues regarding adaptive learning, dynamic programming is highly recommended to strengthen the learning model. Additionally, to overcome data complexity issues resulting from changing operating conditions, DL is the main recommended path.
Among recently published papers dealing with DL for performance evaluation of PEMFCs, including prognosis (i.e., RUL prediction), techniques derived from leading supervised learning tools such as convolutional neural networks (CNNs) [10,11,12,13], long short-term memory networks (LSTMs) [14,15,16], deep belief networks (DBNs), and autoencoders (AEs) [17,18] have been extensively investigated. CNNs are generally recommended for separating different data patterns and providing an enhanced representation. However, CNN learning rules, as originally proposed, lack the element of adaptive learning. By contrast, LSTMs and their variants, such as gated recurrent units, are very popular adaptive learning and time series networks [19]. Unlike CNNs, their robustness lies in exploiting the mathematical correlation between samples that behave similarly, while their adaptive learning mechanisms deliberately ignore (i.e., forget) unwanted learning samples through specific parameters known as gates. Meanwhile, DBNs depend entirely on the robustness of feature extraction. In this case, unsupervised generative models, such as autoencoders and even adversarial networks, can be involved in fine-tuning DBNs for supervised learning.
Generally speaking, the above learning models are designed to achieve real-time performance evaluation. More specifically, RUL prediction models have been designed for either single-step-ahead or multistep-ahead prediction. These training scenarios have been realized under the previously discussed prognosis types, i.e., either long-term or short-term prognosis. Depending on the type of training rules, these models have proven to be successful regressors in terms of dynamic programming, approximation, or generalization. The main open challenge, in this case, is to provide these DL models with a better representation of the feature space, so as to explore more meaningful patterns in the data, which logically yields better performance.
In this context, the contributions of this paper are threefold:
  • In an attempt to address adaptive learning, the LSTM learning rules that seem to be a perfect choice in this case are selected to train the DL model;
  • To explore new representation learning features, the LSTM network is exposed to a recurrent expansion process with multiple repeats. This process makes the LSTM learn not only from the input representation, but also from feature maps and estimated targets. This makes a total of three sources of information, which are repeatedly merged into several LSTMs, leading to a deeper representation than that of ordinary LSTMs;
  • To evaluate the REDL model, the well-known PHM 2014 data challenge dataset has been employed under a long-term prognosis based on a single step-ahead prediction.
This paper is organized as follows: Section 2 is dedicated to the problem description. In Section 3, we introduce the REDL model and its learning rules. Section 4 is devoted to experiments and results discussion. Section 5 concludes this study with some remarks.

2. Problem Description

Data investigated in this study are provided by the fuel cell laboratory (FCLAB FR CNRS 3539, France, http://eng.fclab.fr/ accessed on 7 June 2022), where they were first introduced at the IEEE PHM data challenge in 2014 [20]. The FCLAB test bench supports experimental studies of both ordinary and accelerated degradation, while also providing access to monitoring parameters (e.g., power loads, temperatures, hydrogen and air stoichiometry rates, etc.) and allowing operating conditions to be modified. The test bench in Figure 2a is mainly used to study FCs with a maximum power of 1 kW, and its operating conditions are depicted in Figure 2b. Gas humidification and the transport of air and hydrogen are ensured by specific independent boilers placed upstream of the FC stack, where only the air boiler is subject to heat control to achieve the desired relative humidity. The hydrogen boiler has an ambient temperature controller, whilst the stack temperature and power supply are controlled by cooling water and a TDI Dynaload active load, respectively. The studied PEMFC stack contains five cells with an active surface of 100 cm2. The PEMFC is built from commercial membranes, diffusion layers, and machined flow distribution plates. The nominal current density of the cells is 0.70 A/cm2, while the maximum current density is 1 A/cm2.
During this experiment, two long-term durability tests are carried out. The first experiment (FC1) is mainly dedicated to the stationary regime where the operating conditions are considered constant and stable during the whole experiment. The second one (FC2) is mainly focused on the non-stationary regime, where the operating conditions could be subject to a wide range of disturbances. Two types of data were collected, in particular, polarization curves and aging data. Polarization curves are intended to be used for state of health assessment while aging data are used for RUL prediction. In this case, we are interested in aging data as our main goal is to predict RUL.
For the FC1 experiment, a complete characterization was achieved every week at times t = 0, 48, 185, 348, 515, 658, 823, and 991 h, where a stationary current of 70 A was imposed. First, electrochemical impedance spectroscopy (EIS) was performed only at 0.70 A/cm2 to evaluate the FC state before measuring the polarization. After that, the polarization curve was recorded under a current ramp from 0 A/cm2 to 1 A/cm2 over 1000 s. The air and hydrogen flows were reduced down to a current of 20 A and then kept constant at this stage. Finally, the EIS was performed again for constant currents of 0.70 A/cm2, 0.45 A/cm2, and 0.20 A/cm2, respectively, with a stabilization period of 15 min. For the FC2 experiment, the current was subject to dynamic solicitations with a ripple around 70 A, oscillations of 7 A, and a frequency of 5 kHz. Weekly characterizations, first the polarization curve and then EIS, were performed at times t = 0, 35, 182, 343, 515, 666, 830, and 1016 h.
The collected aging dataset features are defined as follows:
  • Aging time (time (h));
  • Single cell and stack voltages (V) (U1–U5 and Utot);
  • Current (A) and current density (A/cm2) (I; J);
  • Inlet and outlet temperatures of H2 (°C) (TinH2; ToutH2);
  • Inlet and outlet temperatures of air (°C) (TinAIR; ToutAIR);
  • Inlet and outlet temperatures of cooling water (°C) (TinWAT; ToutWAT);
  • Inlet and outlet pressures of H2 (mbara) (PinH2; PoutH2);
  • Inlet and outlet pressures of air (mbara) (PinAIR; PoutAIR);
  • Inlet and outlet flow rates of H2 (l/mn) (DinH2; DoutH2);
  • Inlet and outlet flow rates of air (l/mn) (DinAIR; DoutAIR);
  • Flow rate of cooling water (l/mn) (DWAT);
  • Inlet hygrometry (air) (%) (HrAIRFC).
When considering aging data for RUL prediction, the PHM 2014 main goal was to use the dataset generated from FC1 to build the data-driven model. The model was then tested on a portion of the FC2 experiment, while a final validation step was performed for long-term predictions using the predicted samples themselves.
In our work, among the set of previously stated features, the fuel cell voltage, a commonly used static health indicator at the cell/battery level, was selected to conduct our time series prognosis experiments. Since data collection with sensors is subject to outliers, a process of filtering and outlier removal is mandatory. Figure 3 is introduced to better explain the RUL prediction problem. Figure 3a, which represents the FC1 case study, shows the entire PEMFC deterioration pattern, while only 500 h are revealed for FC2. Comparing FC2 and FC1 voltage behaviors, FC2 is subject to more outliers and dynamic disturbances than FC1. Therefore, a second, filtered version of the signal is necessary. In this case, average window filtering and outlier removal are used to produce the clean versions of the signals shown in the figure. In our study, we set the failure threshold at 800 h as a default value, which corresponds to about 10% of voltage drop. It should be mentioned that identifying the failure threshold for a PEMFC is still an open question. However, the 10% voltage drop is inspired by the United States Department of Energy policy for vehicle applications [1]. In FC2, the deterioration patterns from 500 h to the failure threshold represent the main challenge of long-term prognosis in this case.
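The preparation step can be sketched as follows. The window length and outlier threshold below are hypothetical, since the paper does not state the exact filter parameters; this is a minimal sketch of "average window filtering and outlier removal", not the authors' exact pipeline.

```python
import numpy as np

def clean_voltage(signal, window=20, z_thresh=3.0):
    """Remove outliers by a z-score rule, then smooth with a moving average.

    `window` and `z_thresh` are illustrative values (assumptions), not the
    parameters used in the original study.
    """
    x = np.asarray(signal, dtype=float)
    # Flag points far from the median as outliers and replace them by
    # linear interpolation from the remaining (good) samples.
    z = np.abs(x - np.median(x)) / (np.std(x) + 1e-12)
    good = z < z_thresh
    x = np.interp(np.arange(len(x)), np.flatnonzero(good), x[good])
    # Moving-average smoothing, keeping the original signal length.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")
```

Note that `mode="same"` zero-pads at the edges, so the first and last few samples are attenuated; in practice one may prefer to trim or reflect-pad the boundaries.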
It should be mentioned that the linear degradation path presented in Figure 3b is not obtained by time series prediction based on the nonlinear degradation signals. In fact, the time record is used as the input of a linear regression model while the filtered signal is the output. This is logically not the standard way to perform nonlinear prediction, as in our case, because time itself has nothing to do with cell deterioration; it is, in fact, the operating conditions that have a real effect on the cell as time evolves. However, our main goal was to create a reference for our predictions so that we could at least assess approximately how far the predictions deviated from the degradation trend, especially in the challenge part, where the actual data (i.e., the ground-truth labels) were not revealed.
Accordingly, FC1 data were used to train the prediction model for single-step-ahead prediction, while FC2 data were used to test the model accuracy. Finally, the challenge part consisted of prediction using previously predicted samples from FC2 as inputs to the model. To perform this time series analysis, a sliding window with a size of 50 samples and an overlap of 30 samples was adopted in this work. Figure 4 indicates the process of collecting data inputs and targets for the training (Figure 4a), testing, and challenge parts (Figure 4b).
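The windowing described above can be sketched as follows: a 50-sample window with a 30-sample overlap advances by a 20-sample stride. Taking the sample immediately after each window as the single-step-ahead target is our assumption about the input/target convention, which the text does not spell out.

```python
import numpy as np

def sliding_windows(series, size=50, overlap=30):
    """Collect (input frame, next-sample target) pairs for single-step-ahead
    prediction. The stride is `size - overlap` (20 samples here)."""
    step = size - overlap
    X, y = [], []
    for start in range(0, len(series) - size, step):
        X.append(series[start:start + size])  # 50-sample input frame
        y.append(series[start + size])        # one-step-ahead target (assumed)
    return np.array(X), np.array(y)
```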
A very interesting point to be taken into account is that, in this particular time series analysis, min–max normalization is no longer recommended: under single-step-ahead prediction especially, the actual meaning of the samples could easily be distorted. Therefore, normalization based on the mean and standard deviation of the entire dataset, applied to each sliding frame, is more significant. We introduce Equation (1) as the main scaling method recommended in this case, where $x$ and $x'$ are the original and normalized inputs of a single time frame, respectively, while $\mu$ and $\delta$ refer to the mean and standard deviation of the entire training, testing, or challenge dataset.

$$x' = \frac{x - \mu}{\delta} \qquad (1)$$
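A minimal sketch of this scaling, assuming the statistics are computed once over all frames of a given subset (training, testing, or challenge) and then applied to every frame:

```python
import numpy as np

def normalize_frames(frames):
    """Scale sliding-window frames with the mean and standard deviation of
    the entire subset, rather than min-max scaling, as recommended in the
    text. Returns the scaled frames plus the statistics for reuse."""
    frames = np.asarray(frames, dtype=float)
    mu, sigma = frames.mean(), frames.std()
    return (frames - mu) / sigma, mu, sigma
```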

3. Recurrent Expansion of Deep Learning

This section is introduced to describe the proposed REDL algorithm and its main learning rules. Figure 5 indicates that training a deep network with REDL rules should be performed in accordance with the following steps.
Step 1: A deep network is used to train an approximation function $f_k$ in (2) with deep representations $\varphi_k(x_k)$ learned from inputs $x$ according to a specific loss function $l_k$ in (3). Here, $\varphi_k(x_k)$ can be any sort of feature mapping produced while training the deep network: LSTM layers, convolutional mappings, the encoded layers of any type of autoencoder, the hidden layers of a deep belief network, etc.

$$f_k = \varphi_k(x_k) \qquad (2)$$

$$l_k = \mathrm{loss}_k(y, \tilde{y}_k) \qquad (3)$$
Step 2: The entire deep network, including $x$, $\varphi_k(x_k)$, and the estimated targets $\tilde{y}_k$, is fused by concatenation as in (4) to train the same type of model repeatedly. Here $x_{k+1}$ represents the new inputs to the next training network, and $k$ denotes the number of rounds of the recurrent expansion process.

$$x_{k+1} = [x_k, \varphi_k(x_k), \tilde{y}_k] \qquad (4)$$
Step 3: It should be mentioned that the combination in Equation (4) can become extremely large owing to the size of the deep network's layers. Moreover, feature maps and estimated targets no longer follow the same normalization procedure as the inputs. Hence, dimensionality reduction and renormalization of the entire collection are very important tasks in this case.
Step 4: Stopping criteria, in this case, are evaluated by different approximation quality metrics depending on the type of application (i.e., classification or regression). Moreover, loss function behaviors such as convergence speed and stabilization can also be used to monitor the multiple-repeat training process.
Since RUL prediction problems are regression problems, well-known quality metrics such as the root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), expressed by Formulas (5)–(7), are used in this work. Additionally, for the loss function $l_k$, we also use the RMSE as the main approximation error. As a result, the area under the loss curve (AULC) is adopted to quantify the loss function behavior (8). Here, $n$ refers to the number of training, testing, or challenge samples, and $m$ designates the maximum number of epochs.
$$\mathrm{RMSE}_k = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\tilde{y}_k^i - y_k^i\right)^2} \qquad (5)$$

$$\mathrm{MAE}_k = \frac{1}{n}\sum_{i=1}^{n}\left|\tilde{y}_k^i - y_k^i\right| \qquad (6)$$

$$\mathrm{MAPE}_k = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\tilde{y}_k^i - y_k^i}{y_k^i}\right| \qquad (7)$$

$$\mathrm{AULC}_k = \int_{0}^{m} l_k \, dx \qquad (8)$$
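Equations (5)-(8) can be sketched directly. In `aulc` below, the integral over epochs is approximated with a unit-spaced trapezoidal rule over the recorded per-epoch losses, which is our implementation choice rather than something specified in the paper.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and MAPE as in Equations (5)-(7)."""
    y_true = np.asarray(y_true, dtype=float)
    e = np.asarray(y_pred, dtype=float) - y_true
    rmse = np.sqrt(np.mean(e ** 2))
    mae = np.mean(np.abs(e))
    mape = np.mean(np.abs(e / y_true))
    return rmse, mae, mape

def aulc(loss_curve):
    """Area under the loss curve (Equation (8)), trapezoidal rule with
    unit epoch spacing (an implementation choice)."""
    l = np.asarray(loss_curve, dtype=float)
    return np.sum((l[:-1] + l[1:]) / 2.0)
```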
Another interesting scoring metric, suggested initially by the PHM 2014 benchmark developers, is also used. The score function $S_k$ in Equation (9) measures the precision of the prediction model by considering the effects of early and late predictions on maintenance planning. Early predictions consume CBM resources, while late predictions could result in damages, including financial and human losses. $E_k$ is a percentage error, $a = 0.5$ is a constant, and $b$ is a penalization parameter set differently for each type of prediction (i.e., $b = 5$ for late predictions and $b = 20$ for early ones).
$$S_k = e^{\ln(a)\,E_k/b} \qquad (9)$$

$$E_k = 100\% \times \frac{y_k^i - \tilde{y}_k^i}{y_k^i} \qquad (10)$$
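A sketch of the score under our reading of Equations (9) and (10). Two points are assumptions on our part: the error magnitude $|E_k|$ is used in the exponent so that the score stays within (0, 1] for both error signs, and late predictions are identified with $E_k < 0$ (as stated later in the discussion of Figure 9).

```python
import numpy as np

def phm_score(y_true, y_pred, a=0.5):
    """PHM 2014-style score: percentage error with asymmetric penalties,
    b = 5 for late predictions (E < 0, assumed) and b = 20 for early ones."""
    y_true = np.asarray(y_true, dtype=float)
    e = 100.0 * (y_true - np.asarray(y_pred, dtype=float)) / y_true
    b = np.where(e < 0, 5.0, 20.0)  # harsher penalty for late predictions
    # |E| keeps the score in (0, 1]; exact score = 1 only for E = 0.
    return np.exp(np.log(a) * np.abs(e) / b)
```

With $a = 0.5$, a perfect prediction scores 1, and a 20% early error scores $0.5^{1} = 0.5$, i.e., the score halves every $b$ percentage points of error.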
Based on our aforementioned remark regarding the alteration of the PEMFC owing to the dynamism of operating conditions, a single-layer LSTM was adopted as the main deep architecture to train the REDL model. Accordingly, the LSTM layers represent $\varphi_k(x_k)$, and the estimated targets in each round are expressed by $\tilde{y}_k$.
For a better understanding of the followed training procedures, the pseudo-code of Algorithm 1 gives more explanations about REDL training rules.
Algorithm 1. REDL algorithm
Inputs: $x$, $y$, $k$, $m$, $n$
Outputs: $\tilde{y}_k$
For $j = 1:k$
% Train the deep network and evaluate the loss function
$f_j = \varphi_j(x_j)$
$l_j = \mathrm{loss}_j(y, \tilde{y}_j)$
% Rebuild the new inputs
$x_{j+1} = [x_j, \varphi_j(x_j), \tilde{y}_j]$
% Evaluate the training metrics of round $j$: $\mathrm{RMSE}_j$, $\mathrm{MAE}_j$, $\mathrm{MAPE}_j$, $\mathrm{AULC}_j$, and $S_j$, as in (5)–(9)
End (For)
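Algorithm 1 can be sketched end-to-end with any base learner standing in for the single-layer LSTM. In the sketch below, the random nonlinear feature map and closed-form ridge regressor are hypothetical stand-ins used only to make the recurrent expansion loop concrete; the paper itself uses LSTM layers as $\varphi_k$.

```python
import numpy as np

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: a hypothetical stand-in for the LSTM."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def ridge_predict(X, w):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

def redl(X, y, rounds=5, n_features=16, seed=0):
    """Recurrent expansion with multiple repeats (Algorithm 1, sketched).
    Returns the per-round training RMSE losses."""
    rng = np.random.default_rng(seed)
    x_k, losses = np.asarray(X, dtype=float), []
    for _ in range(rounds):
        # Step 1: "deep" representation phi_k(x_k) -- here a random
        # nonlinear projection instead of trained LSTM layers.
        P = rng.standard_normal((x_k.shape[1], n_features))
        phi = np.tanh(x_k @ P)
        w = ridge_fit(phi, y)
        y_hat = ridge_predict(phi, w)
        losses.append(float(np.sqrt(np.mean((y_hat - y) ** 2))))  # RMSE loss
        # Step 2: fuse inputs, feature map, and estimated targets (Eq. (4)).
        x_k = np.hstack([x_k, phi, y_hat[:, None]])
        # Step 3: renormalize the fused representation.
        x_k = (x_k - x_k.mean(0)) / (x_k.std(0) + 1e-12)
    return losses
```

Each round widens the input from $x_k$ to $[x_k, \varphi_k(x_k), \tilde{y}_k]$, so the representation fed to the next model grows; Step 4 (stopping on loss behavior or AULC) is left to the caller by simply inspecting the returned loss list.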

4. Results and Discussion

Prognosis model reconstruction experiments were carried out in the MATLAB R2018b environment, with computational resources consisting of a personal computer (PC) with an i7 microprocessor, 16 GB of RAM, and 12 MB of cache. A single-layer LSTM network with 16 neurons, maximum epochs = 300, m = 200, mini-batch size = 150, the ADAM optimizer, initial learning rate = 0.1, gradient threshold = 1, and L2 regularization parameter = 0.01 was trained according to the REDL rules. Since our goal was to study the effect of the REDL rules in improving DL representations for RUL prediction of PEMFCs, these parameters were manually tuned according to expert decisions and then fixed for the entire procedure. Additionally, the random seed for learning weight generation was fixed to a single source to ensure that variations in outcomes resulted only from the REDL rules and not from other sources. In this context, the LSTM was trained for k = 20 rounds with the REDL rules.
To evaluate the proposed model, all previously discussed metrics were recorded in each round for the three subsets: training, testing, and challenge. Exceptions were made for the loss curves and the AULC metric, which relate to training only. Figure 6a shows some loss curves during these rounds, while Figure 6b illustrates the AULC values. Figure 6a shows that, from rounds 1 to 16, the loss curves exhibit fewer fluctuations and better convergence and stabilization in each round. Figure 6b reveals the same behavior, highlighting an obvious AULC reduction after each round until a certain threshold. It can thus be stated that REDL helps to discover new representations from each previously trained model. These representations remain very important until they start to vanish, at which point unrolling the model further becomes meaningless to the learning model.
The curves of Figure 7 address the other metrics across the entire model training and evaluation phases. We can observe significant enhancements in each training round, with exceptions in some rounds, such as rounds 17 and 18 (Figure 7a–c). The explanation given for Figure 6 also applies here. Concerning Figure 7d, related to the challenge part, the model shows strong performance, which matters most in this case because the main goal is to achieve a long-term prognosis that is as accurate as possible.
As for prediction illustrations, the REDL curves at different rounds are showcased in Figure 8. They support the same conclusions: the model in each round clearly reflects a better curve fit. In this case, the proposed score function can be applied as the best way to evaluate the precision of the predictions.
Figure 9 depicts the distribution of score values according to the percentage error at the last round. During the training and testing phases (Figure 9a,b), the model shows high precision, with scores approaching the maximum value of one. Moreover, we observe a balance between early (i.e., $E > 0$) and late predictions (i.e., $E < 0$). Additionally, according to Table 1, the long-term predictions (Figure 9c) are late predictions, with a precision about 30% lower than in the testing phase.
Table 1 summarizes all results obtained at the different phases of model training and evaluation. It shows that the best performances of the model are obtained in the last rounds of each phase. Although a conclusive explanation is difficult, it can be asserted that the new training sources, including feature maps and estimated targets from different models, play an important role in representation learning compared to a single source of inputs.

5. Conclusions

In this paper, we proposed the REDL algorithm as a novel approach for training deep networks, applied to time series prognosis of PEMFCs. REDL improves representation learning by applying a recurrent expansion rule: the entire deep network, including inputs, feature mappings, and estimated targets, is fused into further deep learning models with the same architecture for multiple repeats. Different types of visual and numerical metrics were used during the learning and evaluation processes. A 20-round case study was conducted to demonstrate the superiority of the designed learning scheme. The findings demonstrate the ability of REDL to provide a meaningful feature space in each new round. Compared to relevant studies in the field, the REDL algorithm brought more insight into deeper feature spaces than the aforementioned standard deep networks (i.e., CNNs, LSTMs, DBNs, etc.). Therefore, we expect these developments to contribute to shaping a new era of representation learning and feature mapping. Regarding future prospects, the model currently relies on expert knowledge when deciding on the AULC and determining the stopping criteria, so one of the main directions will be automatic stopping-criteria decision-making to make the model more user-friendly. Additionally, the REDL rules suffer from difficulties in finding appropriate initial mappings that allow good AULC convergence; it is therefore worth seeking better strategies to initialize REDL in the first rounds.

Author Contributions

Conceptualization, T.B. (Tarek Berghout) and M.B.; methodology, T.B. (Tarek Berghout) and M.B.; software, T.B. (Tarek Berghout); validation, T.B. (Tarek Berghout), M.B., T.B. (Toufik Bentrcia), Y.A. and L.-H.M.; formal analysis, T.B. (Tarek Berghout), M.B., T.B. (Toufik Bentrcia), Y.A. and L.-H.M.; investigation, T.B. (Tarek Berghout) and M.B.; data curation, T.B. (Tarek Berghout); writing—original draft preparation; writing—review and editing, T.B. (Tarek Berghout), M.B., T.B. (Toufik Bentrcia), Y.A. and L.-H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hua, Z.; Zheng, Z.; Pahon, E.; Péra, M.-C.; Gao, F. A review on lifetime prediction of proton exchange membrane fuel cells system. J. Power Sources 2022, 529, 231256. [Google Scholar] [CrossRef]
  2. Zhang, D.; Cadet, C.; Yousfi-Steiner, N.; Druart, F.; Bérenguer, C. PHM-oriented Degradation Indicators for Batteries and Fuel Cells. Fuel Cells 2017, 17, 268–276. [Google Scholar] [CrossRef]
  3. Sutharssan, T.; Montalvao, D.; Chen, Y.K.; Wang, W.C.; Pisac, C.; Elemara, H. A review on prognostics and health monitoring of proton exchange membrane fuel cell. Renew. Sustain. Energy Rev. 2017, 75, 440–450. [Google Scholar] [CrossRef] [Green Version]
  4. Jacome, A.; Hissel, D.; Heiries, V.; Gerard, M.; Rosini, S. Prognostic methods for proton exchange membrane fuel cell under automotive load cycling: A review. IET Electr. Syst. Transp. 2020, 10, 369–375. [Google Scholar] [CrossRef]
  5. Berghout, T.; Benbouzid, M. A Systematic Guide for Predicting Remaining Useful Life with Machine Learning. Electronics 2022, 11, 1125. [Google Scholar] [CrossRef]
  6. Berghout, T.; Mouss, L.; Kadri, O.; Saïdi, L.; Benbouzid, M. Aircraft Engines Remaining Useful Life Prediction with an Improved Online Sequential Extreme Learning Machine. Appl. Sci. 2020, 10, 1062. [Google Scholar] [CrossRef] [Green Version]
  7. Berghout, T.; Mouss, L.H.; Kadri, O.; Saïdi, L.; Benbouzid, M. Aircraft engines Remaining Useful Life prediction with an adaptive denoising online sequential Extreme Learning Machine. Eng. Appl. Artif. Intell. 2020, 96, 103936. [Google Scholar] [CrossRef]
  8. Zhang, Z.; Wang, Y.X.; He, H.; Sun, F. A short- and long-term prognostic associating with remaining useful life estimation for proton exchange membrane fuel cell. Appl. Energy 2021, 304, 117841. [Google Scholar] [CrossRef]
  9. Lin, R.H.; Xi, X.N.; Wang, P.N.; Wu, B.D.; Tian, S.M. Review on hydrogen fuel cell condition monitoring and prediction methods. Int. J. Hydrogen Energy 2019, 44, 5488–5498. [Google Scholar] [CrossRef]
  10. Pan, M.; Hu, P.; Gao, R.; Liang, K. Multistep prediction of remaining useful life of proton exchange membrane fuel cell based on temporal convolutional network. Int. J. Green Energy 2022, 1–15. [Google Scholar] [CrossRef]
11. Zhou, S.; Lu, Y.; Bao, D.; Wang, K.; Shan, J.; Hou, Z. Real-time data-driven fault diagnosis of proton exchange membrane fuel cell system based on binary encoding convolutional neural network. Int. J. Hydrogen Energy 2022, 47, 10976–10989.
12. Zhang, X.; Guo, X. Fault diagnosis of proton exchange membrane fuel cell system of tram based on information fusion and deep learning. Int. J. Hydrogen Energy 2021, 46, 30828–30840.
13. Benaggoune, K.; Yue, M.; Jemei, S.; Zerhouni, N. A data-driven method for multi-step-ahead prediction and long-term prognostics of proton exchange membrane fuel cell. Appl. Energy 2022, 313, 118835.
14. Wang, C.; Li, Z.; Outbib, R.; Dou, M.; Zhao, D. A novel long short-term memory networks-based data-driven prognostic strategy for proton exchange membrane fuel cells. Int. J. Hydrogen Energy 2022, 47, 10395–10408.
15. Ma, J.; Liu, X.; Zou, X.; Yue, M.; Shang, P.; Kang, L.; Jemei, S.; Lu, C.; Ding, Y.; Zerhouni, N.; et al. Degradation prognosis for proton exchange membrane fuel cell based on hybrid transfer learning and intercell differences. ISA Trans. 2021, 113, 149–165.
16. Ma, T.; Xu, J.; Li, R.; Yao, N.; Yang, Y. Online Short-Term Remaining Useful Life Prediction of Fuel Cell Vehicles Based on Cloud System. Energies 2021, 14, 2806.
17. Li, H.-W.; Xu, B.-S.; Du, C.-H.; Yang, Y. Performance prediction and power density maximization of a proton exchange membrane fuel cell based on deep belief network. J. Power Sources 2020, 461, 228154.
18. Meraghni, S.; Terrissa, L.S.; Yue, M.; Ma, J.; Jemei, S.; Zerhouni, N. A data-driven digital-twin prognostics method for proton exchange membrane fuel cell remaining useful life prediction. Int. J. Hydrogen Energy 2021, 46, 2555–2564.
19. Long, B.; Wu, K.; Li, P.; Li, M. A Novel Remaining Useful Life Prediction Method for Hydrogen Fuel Cells Based on the Gated Recurrent Unit Neural Network. Appl. Sci. 2022, 12, 432.
20. Fabien, H. IEEE PHM Data Challenge 2014. Fuel Cell Lab UAR 2200 2021.
Figure 1. Schematic diagram of PEMFC stack and operating principle: (a) operating principle of a PEMFC. Reprinted with permission from Ref. [2], Wiley Online Library: 2017; (b) PEMFC stack structure. Reprinted with permission from Ref. [3], Elsevier: 2017.
Figure 2. FCLAB test bench and operating conditions: (a) FCLAB test bench. Reprinted with permission from Ref. [19], MDPI: 2022; (b) operating conditions limits.
Figure 3. Illustration of the prediction problem's main features: (a) raw and prepared versions of FC1 voltage data for training; (b) raw and prepared versions of FC2 data for testing and the main challenge.
Figure 4. Sliding a time window for sample collection: (a) collecting training samples; (b) collecting samples for both the testing and challenge parts.
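The window-sliding procedure of Figure 4 amounts to cutting fixed-length input segments from the voltage series and pairing each with a later target value. A minimal sketch follows; the function name, window length, and one-step horizon are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def sliding_window(series, window, horizon=1):
    # Collect (input, target) pairs by sliding a fixed-length window over a
    # univariate series; each target lies `horizon` steps past the window.
    # Hypothetical helper: name and parameters are illustrative only.
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1])
    return np.asarray(X), np.asarray(y)

# Toy voltage-like series: 10 points, window of 3, one-step-ahead targets
v = np.linspace(3.3, 3.2, 10)
X, y = sliding_window(v, window=3)
print(X.shape, y.shape)  # (7, 3) (7,)
```

The same routine serves both panels of the figure: it is run once over the training part of FC1 and again over the testing and challenge parts of FC2.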
Figure 5. Schematic diagram illustrating the REDL algorithm architecture.
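Following the abstract's description of recurrent expansion, each round feeds the previous round's generated responses back into the inputs before retraining. A minimal sketch of that loop, with a least-squares learner standing in for the deep model (the function names, the linear learner, and the augmentation scheme are assumptions, not the paper's code):

```python
import numpy as np

def fit_least_squares(X, y):
    # Stand-in "base learner": ordinary least squares with a bias column.
    # The paper uses deep models; a linear learner keeps the sketch self-contained.
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict(w, X):
    return np.c_[X, np.ones(len(X))] @ w

def recurrent_expansion(X, y, rounds=3):
    # Each round retrains a model on the original inputs augmented with the
    # previous round's responses, then records that round's training RMSE.
    X_aug, history = X, []
    for _ in range(rounds):
        w = fit_least_squares(X_aug, y)
        y_hat = predict(w, X_aug)
        history.append(float(np.sqrt(np.mean((y - y_hat) ** 2))))
        X_aug = np.c_[X, y_hat]  # expand inputs with the latest response
    return w, history

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([0.5, -1.0, 0.2, 0.0]) + 0.01 * rng.normal(size=50)
w, history = recurrent_expansion(X, y, rounds=3)
```

In the paper's setting, the intermediate feature maps of the deep model would also be appended at each round, which this scalar-response sketch does not attempt to reproduce.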
Figure 6. REDL convergence features: (a) loss curves during different training rounds; (b) AULC values during the entire training process.
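The AULC values in Figure 6b plausibly denote the area under the loss curve, which can be computed with the trapezoidal rule over epochs; this reading of the metric is an assumption based on the figure, not the paper's stated formula:

```python
import numpy as np

def aulc(losses):
    # Area under the loss curve via the trapezoidal rule with unit epoch
    # spacing. A smaller AULC suggests faster, steadier convergence.
    # Interpretation assumed from the figure, not the paper's exact definition.
    losses = np.asarray(losses, dtype=float)
    return float(np.sum((losses[1:] + losses[:-1]) / 2.0))

print(aulc([1.0, 0.5, 0.25, 0.125]))  # 1.3125
```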
Figure 7. REDL model performances during both training and evaluation: (a) REDL training performances in each round; (b) REDL testing performances in each round; (c) REDL performance on the challenge part; (d) score behavior during the testing phase.
Figure 8. Curve-fit results in different rounds: (a) REDL curve fit in the training phase; (b) REDL curve fit in the testing phase; (c) REDL long-term predictions.
Figure 9. Scoring results in the last round: (a) scoring results in the training phase; (b) scoring results in the testing phase; (c) scoring results in the long-term prediction phase.
Table 1. REDL training and evaluation performances for 20 rounds.
| Round | RMSE (train) | MAPE (train) | MAE (train) | Score (train) | RMSE (test) | MAPE (test) | MAE (test) | Score (test) | RMSE (challenge) | MAPE (challenge) | MAE (challenge) | Score (challenge) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.033159330 | 0.023722719 | 0.0072821542 | 0.92002690 | 0.056732137 | 0.049719792 | 0.015370994 | 0.81936848 | 0.15683731 | 0.12733930 | 0.041095521 | 0.63273019 |
| 2 | 0.030636718 | 0.027180761 | 0.0083214669 | 0.91627026 | 0.044685747 | 0.041781113 | 0.012895246 | 0.84617543 | 0.16161914 | 0.13094915 | 0.042248707 | 0.62413299 |
| 3 | 0.029612046 | 0.027371462 | 0.0083745476 | 0.90588462 | 0.048842221 | 0.046712078 | 0.014405440 | 0.82550979 | 0.16831230 | 0.13698862 | 0.044194922 | 0.60835212 |
| 4 | 0.026535871 | 0.026079915 | 0.0079731895 | 0.91825724 | 0.041289769 | 0.039362255 | 0.012127122 | 0.85150087 | 0.16477385 | 0.13238907 | 0.042694196 | 0.62304616 |
| 5 | 0.024537437 | 0.022679042 | 0.0069310660 | 0.93151605 | 0.038881902 | 0.036123350 | 0.011124481 | 0.86324179 | 0.16201790 | 0.13170822 | 0.042486887 | 0.62316668 |
| 6 | 0.033165578 | 0.031330664 | 0.0095755085 | 0.92274129 | 0.044238463 | 0.041489307 | 0.012783380 | 0.86261362 | 0.15949211 | 0.12608598 | 0.040625986 | 0.64582580 |
| 7 | 0.032748155 | 0.030896310 | 0.0094424970 | 0.92442864 | 0.043780886 | 0.041123148 | 0.012671057 | 0.86381602 | 0.15723903 | 0.12411878 | 0.039992537 | 0.65063918 |
| 8 | 0.034674119 | 0.032917842 | 0.010060941 | 0.91699874 | 0.046032194 | 0.043506488 | 0.013408389 | 0.85589385 | 0.15597044 | 0.12381225 | 0.039899029 | 0.65067977 |
| 9 | 0.036812291 | 0.034490943 | 0.010541267 | 0.91538703 | 0.045230269 | 0.042156879 | 0.012994763 | 0.86243391 | 0.15562411 | 0.12388387 | 0.039926790 | 0.64989793 |
| 10 | 0.038449079 | 0.036796127 | 0.011248115 | 0.92546928 | 0.038799010 | 0.034231789 | 0.010558312 | 0.89724779 | 0.15354408 | 0.11871769 | 0.038240638 | 0.66631281 |
| 11 | 0.030228008 | 0.028651439 | 0.0087593384 | 0.93371415 | 0.033681050 | 0.028362917 | 0.0087410798 | 0.89927536 | 0.15525524 | 0.12172227 | 0.039238192 | 0.65361106 |
| 12 | 0.032157868 | 0.029493138 | 0.0090174973 | 0.93095386 | 0.035147570 | 0.029136907 | 0.0089802667 | 0.89742321 | 0.15539356 | 0.12235874 | 0.039448339 | 0.65135765 |
| 13 | 0.028446790 | 0.025741093 | 0.0078710560 | 0.95874411 | 0.023056934 | 0.016441997 | 0.0050649284 | 0.94445515 | 0.14934975 | 0.11364102 | 0.036610916 | 0.67830050 |
| 14 | 0.029017091 | 0.025852766 | 0.0079071084 | 0.95863509 | 0.024963899 | 0.016794592 | 0.0051728417 | 0.94394165 | 0.14794481 | 0.11261426 | 0.036273293 | 0.68181121 |
| 15 | 0.034767214 | 0.031789653 | 0.0097222570 | 0.91922754 | 0.039343216 | 0.033002790 | 0.010168388 | 0.88278496 | 0.16327339 | 0.13169754 | 0.042422596 | 0.63577491 |
| 16 | 0.034160808 | 0.031082043 | 0.0095043415 | 0.90920287 | 0.046693861 | 0.040716857 | 0.012543668 | 0.85346359 | 0.16932282 | 0.13884565 | 0.044748176 | 0.61368066 |
| 17 | 0.043039709 | 0.040804420 | 0.012477824 | 0.91384393 | 0.045476168 | 0.039554272 | 0.012190895 | 0.88305718 | 0.15965536 | 0.12794819 | 0.041164067 | 0.65377641 |
| 18 | 0.040491357 | 0.038220614 | 0.011686840 | 0.91653556 | 0.047037777 | 0.041497927 | 0.012789562 | 0.87469977 | 0.15946926 | 0.13037665 | 0.041966382 | 0.64416891 |
| 19 | 0.035089344 | 0.034137890 | 0.010436657 | 0.94700193 | 0.038705546 | 0.036022492 | 0.011100662 | 0.90433884 | 0.14338417 | 0.11235938 | 0.036145385 | 0.69007659 |
| 20 | 0.011156123 | 0.010599943 | 0.0032384545 | 0.98884547 | 0.017378684 | 0.015931116 | 0.0049001793 | 0.93605882 | 0.14608298 | 0.11653841 | 0.037583560 | 0.66352206 |
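The RMSE, MAPE, and MAE columns in Table 1 follow their standard definitions (with MAPE apparently reported in fractional rather than percentage form, judging by its magnitudes); the challenge score formula is dataset-specific and not reproduced here. A sketch with hypothetical voltage values:

```python
import numpy as np

def rmse(y, y_hat):
    # Root mean squared error
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def mae(y, y_hat):
    # Mean absolute error
    return float(np.mean(np.abs(y - y_hat)))

def mape(y, y_hat):
    # Mean absolute percentage error, fractional form (not multiplied by 100),
    # which is an assumption consistent with the magnitudes in Table 1
    return float(np.mean(np.abs((y - y_hat) / y)))

y_true = np.array([3.30, 3.28, 3.25])   # hypothetical stack voltages
y_pred = np.array([3.31, 3.27, 3.26])
print(rmse(y_true, y_pred), mae(y_true, y_pred), mape(y_true, y_pred))
```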

Share and Cite

MDPI and ACS Style

Berghout, T.; Benbouzid, M.; Bentrcia, T.; Amirat, Y.; Mouss, L.-H. Exposing Deep Representations to a Recurrent Expansion with Multiple Repeats for Fuel Cells Time Series Prognosis. Entropy 2022, 24, 1009. https://doi.org/10.3390/e24071009

