# Personalized Transfer Learning Framework for Remaining Useful Life Prediction Using Adaptive Deconstruction and Dynamic Weight Informer


## Abstract


## 1. Introduction

- (1)
- Although data in the health stage are not necessary for prediction modeling, the onset times of the engine degradation stage vary widely and can be difficult to identify. However, most existing methods do not consider changes in the engine operating stage and start transfer prediction in the health stage, thus resulting in the insufficient extraction of degradation information.
- (2)
- Engine data exhibit variations in degradation degrees, degradation rates, and initial health states, thereby forming a multi-source domain. However, existing methods either directly transfer all full-life engine data or simply filter source domain data, disregarding the transferability differences and making it challenging to extract sufficient information from the source domain.
- (3)
- Significant differences between target engines lead to insufficient adaptability of a general transfer prediction model, and the transferability of the multi-source domain differs from one target engine to another. However, most methods ignore these individual differences and build general transfer prediction models, resulting in insufficient fitting of the degradation processes of target engines.

- (1)
- We proposed a personalized transfer learning framework for RUL prediction of the same type of turbofan engines with individual differences in performance degradation trajectories caused by degradation degrees, degradation rates, and initial health states. In contrast to most common transfer learning methods for prediction, which indiscriminately use the whole multi-source-domain data and the entire trajectories, the framework was designed step by step by answering three key questions: when, what, and how to transfer in prediction modeling of engines. In this manner, multi-source-domain data were utilized as fully as possible according to the differential degradation processes and their characteristics, improving both the quantity and the quality of the training data for the prediction models.
- (2)
- A transfer-time identification method based on dual-baseline performance assessment and the Wasserstein distance was designed to eliminate the worthless part of a trajectory for transfer and prediction modeling. The DBA-WD method combines two representations of engine health and failure based on linear weight fusion, thereby preventing endpoint errors caused by reverse distribution. Utilizing the fused performance index, we identified the transfer timing of engine degradation by calculating its slope.
- (3)
- An adaptive deconstruction method was proposed to solve the issue of transferability differences in the multi-source domain caused by the individual differences among engines. The transferability of each sample in the multi-source domain was measured by the TL-EDM approach, and then the source domain was ranked and adaptively deconstructed into two parts according to transferability. This approach measured transferability by considering three aspects, degradation degrees, degradation rates, and initial health degrees, and was especially suitable for the engine characteristic of slowly evolving degradation processes.
- (4)
- We designed a new training loss function considering the transferability and a two-stage transfer learning scheme and introduced them into the informer-based RUL prediction model, which has a great advantage in long-time-series prediction. The dynamic training weight loss function can incorporate the transferability information measured by TL-EDM into the prediction process. The two-stage transfer learning scheme facilitates efficient extraction and integration of the general and individual degradation information of engines. We validated the proposed scheme with the C-MAPSS dataset. The results demonstrated that our method outperforms existing deep-learning techniques in RUL prediction.

## 2. Methods

#### 2.1. Problem Statement and Proposed Framework

#### 2.2. Transfer-Timing Identification Based on DBA-WD

#### 2.2.1. Performance Assessment Baseline Screening

Data over $wi{n}_{d}$ cycles from the screened baseline engines form the health baseline sets, ${G}_{h}\in {\mathbb{R}}^{wi{n}_{d}\times P\times M}$, and the fault baseline sets, ${G}_{f}\in {\mathbb{R}}^{wi{n}_{d}\times P\times M}$.

#### 2.2.2. Dual-Baseline Performance Assessment Based on Wasserstein Distance

The performance of the target engine is assessed by computing the Wasserstein distance (WD) between the current monitoring data and the baseline sets ${G}_{h}$ and ${G}_{f}$. The WD between the data window and ${G}_{h}$ or ${G}_{f}$ is calculated by ${f}_{WD}(\cdot)$, which represents the WD calculation. Further, the arithmetic mean of the WD values over the baselines is used to represent the health distance ${d}_{h}$ and the fault distance ${d}_{f}$.

Linear weight fusion then combines ${d}_{h}$ and ${d}_{f}$ into the HI of the engine, where ${W}_{h}$ and ${W}_{f}$ denote the health and fault baseline weights, respectively, and HI denotes the health index.
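A minimal sketch of the dual-baseline assessment, assuming equal-length 1-D samples (for which the Wasserstein-1 distance reduces to the mean absolute difference of sorted values) and illustrative fusion weights; the sign convention, chosen so that HI declines as the engine degrades, is also our assumption:

```python
import numpy as np

def wd_1d(a, b):
    """1-D Wasserstein-1 distance between two equal-length empirical samples."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def dual_baseline_hi(window, health_baselines, fault_baselines, w_h=0.5, w_f=0.5):
    """Fuse distances to health and fault baselines into one health index.

    window: 1-D array of monitoring data in the current sliding window.
    health_baselines / fault_baselines: lists of 1-D baseline arrays.
    w_h, w_f: illustrative fusion weights (assumed, not from the paper).
    """
    # Arithmetic mean of the WD to each baseline set gives d_h and d_f
    d_h = np.mean([wd_1d(window, g) for g in health_baselines])
    d_f = np.mean([wd_1d(window, g) for g in fault_baselines])
    # Linear weight fusion; HI is large when far from failure, small near it
    return w_f * d_f - w_h * d_h
```

A window resembling the health baselines thus yields a high HI, and one resembling the fault baselines a low HI.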

#### 2.2.3. Transfer-Timing Identification of Engine Performance Degradation

A sliding window of width $wi{n}_{HI}$ is applied to the HI sequence, and a linear fit to the HI within each window is performed, where ${a}_{i}$ is the slope in the i-th window and $b$ is the intercept term. ${a}_{i}$ is defined as a degradation-sensitive feature: when ${a}_{i}$ is close to 0, the engine is relatively healthy, whereas when ${a}_{i}$ is less than 0, the engine has started to degrade.

Once the degradation-sensitive feature crosses the preset threshold $Thr{e}_{a}$, the target engine is deemed to have degraded. In general, the threshold is slightly less than zero to avoid misjudgments due to small fluctuations in the HI. The first cycle whose degradation-sensitive feature falls below $Thr{e}_{a}$ marks the start of the degradation stage.
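The windowed-slope identification above can be sketched as follows; the negative sign on the threshold is our reading of "slope below the threshold" combined with the paper's magnitude of 0.00075:

```python
import numpy as np

def transfer_timing(hi, win_hi=10, thre_a=-0.00075):
    """Return the first cycle whose windowed HI slope drops below thre_a.

    hi: 1-D array of health-index values per cycle.
    win_hi: sliding-window width; thre_a: threshold slightly below zero
    (magnitude from the paper; the sign is an assumption).
    """
    t = np.arange(win_hi)
    for i in range(len(hi) - win_hi + 1):
        # Least-squares linear fit: a_i is the degradation-sensitive slope
        a_i, _b = np.polyfit(t, hi[i:i + win_hi], 1)
        if a_i < thre_a:
            return i + win_hi - 1  # cycle at the end of the i-th window
    return None  # engine still in the healthy stage
```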

#### 2.3. Multi-Source-Domain Adaptive Deconstruction Based on TL-EDM

#### 2.3.1. Transferability Measurement Using Time-Lag Ensemble Distance

$The{r}_{d\text{-}up}$ and $The{r}_{d\text{-}low}$ represent the upper and lower limits of the distance weights, respectively, with $The{r}_{d\text{-}up}+The{r}_{d\text{-}low}=1$. ${L}_{max}$ and ${L}_{min}$ are the maximum and minimum values of the weight range. With these limits, the weights ${W}_{cos}$ and ${W}_{WD}$ can be adaptively determined for different target engines. A weighted ensemble distance is proposed to measure the transferability of the degradation trajectory.

${W}_{cos}$ and ${W}_{WD}$ are the weights of the cosine distance and the WD, respectively; ${f}_{cos}(\cdot)$ and ${f}_{WD}(\cdot)$ represent the cosine distance and WD calculations, respectively; ${d}_{t}$ is the transferable distance; and $\circ$ represents the Hadamard product. The transferable distance set is denoted as $D=\left\{{d}_{1},{d}_{2},\dots ,{d}_{t},\dots ,{d}_{N-L}\right\}$.
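The weighted ensemble distance can be illustrated with a minimal sketch; the adaptive weight rule based on $The{r}_{d\text{-}up}$/$The{r}_{d\text{-}low}$ is omitted here, and the weights are passed in directly:

```python
import numpy as np

def ensemble_distance(x, y, w_cos, w_wd):
    """Weighted ensemble of cosine distance and 1-D Wasserstein distance.

    x, y: equal-length degradation-trajectory segments; w_cos + w_wd = 1.
    """
    # Cosine distance captures trajectory shape differences
    f_cos = 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # 1-D WD-1 between equal-length samples captures distribution differences
    f_wd = float(np.mean(np.abs(np.sort(x) - np.sort(y))))
    return w_cos * f_cos + w_wd * f_wd
```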

#### 2.3.2. Multi-Source-Domain Adaptive Deconstruction with Transferability

- (1)
- Pre-selection based on transferability

According to the transferable distance ranking, the top ${S}_{s}$ engines are selected as the pre-selection set.

- (2)
- Secondary selection based on RUL labels
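The two-step deconstruction can be sketched as below, under our assumption that a smaller transferable distance means higher transferability and that the secondary selection keeps engines whose RUL* lies within r standard deviations of the pre-selection mean:

```python
import numpy as np

def deconstruct(distances, rul_star, s_s=10, r=1.5):
    """Two-step deconstruction of the multi-source domain (a sketch).

    distances: transferable distance per source engine.
    rul_star: RUL* label per source engine.
    """
    # Step 1: pre-selection keeps the s_s most transferable engines
    pre = np.argsort(distances)[:s_s]
    # Step 2: secondary selection keeps RUL* in [mu - r*sigma, mu + r*sigma]
    mu, sigma = rul_star[pre].mean(), rul_star[pre].std()
    keep = np.abs(rul_star[pre] - mu) <= r * sigma
    return pre[keep]  # indices of the high-transferability source domain
```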

#### 2.4. Personalized Transfer Prediction Based on Dynamic-Weight Informer Model

#### 2.4.1. Two-Stage Transfer Learning Prediction Scheme

- (1)
- First transfer stage: constructing and pre-training the informer prediction model

- (2)
- Secondary transfer stage: dynamic weight retraining

- Setting the training data weights with transferability

- Retraining of individual layers of personalized model

- (3)
- Target-engine RUL prediction using the personalized transfer learning method
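In the secondary transfer stage, the general layers are kept and the individual layers are retrained under a loss weighted by transferability. A minimal numpy sketch of such a dynamic-weight MSE; the normalization rule (weights scaled to sum to one) is our assumption:

```python
import numpy as np

def dynamic_weight_mse(y_true, y_pred, transferability):
    """Transferability-weighted MSE for the retraining stage (a sketch).

    transferability: per-sample weights derived from TL-EDM distances,
    so high-transferability samples contribute more to the loss.
    """
    w = np.asarray(transferability, dtype=float)
    w = w / w.sum()  # normalize weights to a distribution (assumed rule)
    err = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sum(w * err ** 2))
```

With equal weights this reduces to the ordinary mean squared error.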

#### 2.4.2. Dynamic-Weight Informer Prediction Model

- (1)
- Position encoding: In this study, we utilize absolute positional encoding to locate each element in the time series [29]. The inputs are denoted as $X=\left\{{x}_{1},{x}_{2},\dots ,{x}_{i},\dots ,{x}_{N}\right\}$, where N denotes the length of the time series. Position encoding of $X$ yields $PE=\left\{p{e}_{1},p{e}_{2},\dots ,p{e}_{N}\right\}$, and the input of the model is $input=\left\{{x}_{1}+p{e}_{1},{x}_{2}+p{e}_{2},\dots ,{x}_{N}+p{e}_{N}\right\}$:$$p{e}_{(i,2j)}=\mathrm{sin}\left(\frac{i}{{10000}^{2j/P}}\right)$$$$p{e}_{(i,2j+1)}=\mathrm{cos}\left(\frac{i}{{10000}^{2j/P}}\right)$$
- (2)
- ProbSparse self-attention mechanism: The attention mechanism is a fundamental component of the transformer model, enabling it to extract essential information from extensive datasets. In time series analysis, the focus is on feature extraction for prediction tasks. For input matrix X, it is transformed into matrix Q (Query), K (Key), and V (Value) by different weight matrices. The ProbSparse self-attention mechanism aims to identify important sparse queries to optimize calculation efficiency. For $Q\in {R}^{m\times \mathrm{d}}$, $K\in {R}^{n\times \mathrm{d}}$, and $V\in {R}^{n\times \mathrm{d}}$, the specific steps are as follows:
- By sampling K, we obtain ${k}_{k}$, with sampling length ${L}_{k}$. For each ${q}_{i}\in Q$, the sparsity measure M is calculated as follows [29]:$$M\left({q}_{i},K\right)=\mathrm{max}\left(\frac{{q}_{i}{k}_{k}^{T}}{\sqrt{{d}_{k}}}\right)-\frac{1}{{L}_{k}}\Sigma \left(\frac{{q}_{i}{k}_{k}^{T}}{\sqrt{{d}_{k}}}\right)$$
- Compute the u queries with the greatest M values and assemble them into a new matrix denoted as $\overline{Q}$.
- Calculate $Attention(Q,K,V)=\mathrm{softmax}\left(\frac{\overline{Q}{K}^{T}}{\sqrt{{d}_{k}}}\right)V$. Attention extraction identifies feature data with high information quality and strong performance expression capabilities, thus improving the model prediction accuracy.

- (3)
- Residual and normalization
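The two core components above can be sketched in numpy: the standard sinusoidal positional encoding (even embedding dimension assumed) and the ProbSparse sparsity measure M, computed per query as the maximum attention score minus the mean score:

```python
import numpy as np

def positional_encoding(n, p):
    """Sinusoidal absolute positional encoding for n positions, even dim p."""
    pos = np.arange(n)[:, None]                # (n, 1) position indices
    div = 10000.0 ** (np.arange(0, p, 2) / p)  # (p/2,) frequency divisors
    pe = np.zeros((n, p))
    pe[:, 0::2] = np.sin(pos / div)            # even dimensions: sine
    pe[:, 1::2] = np.cos(pos / div)            # odd dimensions: cosine
    return pe

def sparsity_measure(q, k_sample, d_k):
    """ProbSparse query sparsity measure: max score minus mean score."""
    scores = q @ k_sample.T / np.sqrt(d_k)     # (n_queries, L_k)
    return scores.max(axis=1) - scores.mean(axis=1)
```

Queries with the largest M values are the "active" ones kept in $\overline{Q}$; a uniform score row gives M = 0, i.e., an uninformative query.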

## 3. Case Study

#### 3.1. Data Description

- (1)
- Training dataset: full-life monitoring parameter data on degradation processes for 100 engines;
- (2)
- Testing dataset: randomly intercepted data on 100 test engines (target engines), in terms of monitoring parameters and their RULs.

#### 3.2. Performance Assessment and Transfer-Timing Identification

The slope obtained by fitting the HI within a sliding window of width $wi{n}_{HI}$ is a degradation-sensitive feature. Here, $Thr{e}_{a}$ was used as the discriminant condition to identify when the degradation-sensitive feature exceeded the threshold, which marks the transfer timing. Numerous previously published works defined the last 130 cycles of engine operation as the degradation stage, while categorizing the preceding cycles as the healthy stage [36]. In our study, a statistical analysis of the healthy-stage data within the FD001 training dataset revealed that the average standard deviation of the degradation-sensitive feature during this phase is 0.0007624. Consequently, we set $Thr{e}_{a}$ to 0.00075, close to this value. Using the No. 3 target engine as an example, the results of degradation-stage and transfer-timing identification are shown in Figure 8.

The degradation-sensitive feature crossed $Thr{e}_{a}$, indicating that the engine had entered the degradation stage. The health state of the target engine exhibited a slight, stable fluctuation during the early stage of degradation, with the corresponding degradation-sensitive feature fluctuating around 0 within a range of 0.0005–0.001. Therefore, $Thr{e}_{a}$ was set to 0.00075.

#### 3.3. Transferability Measurement and Multi-Source-Domain Adaptive Deconstruction

The pre-selection number, ${S}_{s}$, was set to 10, and the secondary selection ratio, r, was set to 1.5. Thus, the source domains with RUL* in the interval [μ − 1.5σ, μ + 1.5σ] were identified as the high-transferability source domains. With respect to $The{r}_{d\text{-}up}$ and $The{r}_{d\text{-}low}$, as shown in Equations (13) and (14), the distance weight range was partitioned into three equal segments; accordingly, $The{r}_{d\text{-}up}$ and $The{r}_{d\text{-}low}$ were configured as 1/3 and 2/3, respectively. The multi-source-domain adaptive deconstruction results for the No. 3 target engine are shown in Figure 11.

#### 3.4. RUL Prediction Using the Personalized Transfer Learning

#### 3.4.1. Construction of the Informer-Based RUL Prediction Model

#### 3.4.2. Personalized Transfer Learning for RUL Prediction Model Using DPCs

#### 3.5. Comparative Analysis of the Prediction Results

#### 3.5.1. Evaluation Index

- Prediction error

- Score

- Relative accuracy

- RMSE
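Two of these indices can be stated concretely. The sketch below implements the RMSE and the standard C-MAPSS scoring function from the prognostics literature, which penalizes late predictions (predicted RUL above the true RUL) more heavily than early ones:

```python
import numpy as np

def cmapss_score(rul_true, rul_pred):
    """Standard C-MAPSS asymmetric score: late predictions cost more."""
    d = np.asarray(rul_pred) - np.asarray(rul_true)
    # Early (d < 0): exp(-d/13) - 1; late (d >= 0): exp(d/10) - 1
    return float(np.sum(np.where(d < 0, np.exp(-d / 13) - 1, np.exp(d / 10) - 1)))

def rmse(rul_true, rul_pred):
    """Root-mean-square prediction error."""
    d = np.asarray(rul_pred) - np.asarray(rul_true)
    return float(np.sqrt(np.mean(d ** 2)))
```

Lower values are better for both indices, which is why the "Total Score" column in the comparison table favors the proposed framework.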

#### 3.5.2. Comparison of Source-Domain Numbers

#### 3.5.3. Methods Comparison

- Comparative experiment without identifying transfer timing

- Comparative experiment without using general model transfer

- Comparative experiment without setting training data weight as transferability

- Comparative experiment without using transfer learning scheme

- Comparative experiment with commonly used deep-learning prediction methods

## 4. Discussion

- (1)
- When to transfer: dual-baseline performance assessment can accurately identify the performance degradation stage of the target engine, which is the key premise for excluding unnecessary health-stage data.

- (2)
- What to transfer: the multi-source-domain adaptive deconstruction based on TL-EDM can effectively mine high-transferability source domains, thus balancing transferable data quantity and information quality.

- (3)
- How to transfer: the personalized transfer learning framework effectively utilizes the general and individual information from multi-source domains of the same type engines, thereby improving the individualization and accuracy of the transfer prediction model for each target engine.

## 5. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

1. Kayid, M.; Alshagrawi, L.; Shrahili, M. Stochastic Ordering Results on Implied Lifetime Distributions under a Specific Degradation Model. *Axioms* **2023**, *12*, 786.
2. Xue, B.; Xu, H.; Huang, X.; Zhu, K.; Xu, Z.; Pei, H. Similarity-based prediction method for machinery remaining useful life: A review. *Int. J. Adv. Manuf. Technol.* **2022**, *121*, 1501–1531.
3. Ahmadzadeh, F.; Lundberg, J. Remaining useful life estimation: Review. *Int. J. Syst. Assur. Eng. Manag.* **2014**, *5*, 461–474.
4. Camci, F.; Chinnam, R.B. Health-state estimation and prognostics in machining processes. *IEEE Trans. Autom. Sci. Eng.* **2010**, *7*, 581–597.
5. Askari, B.; Bozza, A.; Cavone, G.; Carli, R.; Dotoli, M. An Adaptive Constrained Clustering Approach for Real-Time Fault Detection of Industrial Systems. *Eur. J. Control* **2023**, 100858.
6. Atrigna, M.; Buonanno, A.; Carli, R.; Cavone, G.; Scarabaggio, P.; Valenti, M.; Graditi, G.; Dotoli, M. A Machine Learning Approach to Fault Prediction of Power Distribution Grids under Heatwaves. *IEEE Trans. Ind. Appl.* **2023**, *59*, 4835–4845.
7. Wang, Y.; Zhao, Y.; Addepalli, S. Remaining useful life prediction using deep learning approaches: A review. *Procedia Manuf.* **2020**, *49*, 81–88.
8. Wang, Y.; Zhao, Y. Multi-Scale Remaining Useful Life Prediction Using Long Short-Term Memory. *Sustainability* **2022**, *14*, 15667.
9. Wang, Y.; Zhao, Y.; Addepalli, S. Practical options for adopting recurrent neural network and its variants on remaining useful life prediction. *Chin. J. Mech. Eng.* **2021**, *34*, 69.
10. Mou, Q.; Wei, L.; Wang, C.; Luo, D.; He, S.; Zhang, J.; Xu, H.; Luo, C.; Gao, C. Unsupervised domain-adaptive scene-specific pedestrian detection for static video surveillance. *Pattern Recognit.* **2021**, *118*, 108038.
11. Alhudhaif, A.; Polat, K.; Karaman, O. Determination of COVID-19 pneumonia based on generalized convolutional neural network model from chest X-ray images. *Expert Syst. Appl.* **2021**, *180*, 115141.
12. Deng, Z.; Wang, Z.; Tang, Z.; Huang, K.; Zhu, H. A deep transfer learning method based on stacked autoencoder for cross-domain fault diagnosis. *Appl. Math. Comput.* **2021**, *408*, 126318.
13. Yang, B.; Xu, S.; Lei, Y.; Leu, C.G.; Stewart, E.; Roberts, C. Multi-source transfer learning network to complement knowledge for intelligent diagnosis of machines with unseen faults. *Mech. Syst. Signal Process.* **2022**, *162*, 108095.
14. Kim, S.; Choi, Y.Y.; Kim, K.J.; Choi, J.L. Forecasting state-of-health of lithium-ion batteries using variational long short-term memory with transfer learning. *J. Energy Storage* **2021**, *41*, 102893.
15. Pan, D.; Li, H.; Wang, S. Transfer learning-based hybrid remaining useful life prediction for lithium-ion batteries under different stresses. *IEEE Trans. Instrum. Meas.* **2022**, *71*, 3501810.
16. Chen, H.; Zhan, Z.; Jiang, P.; Sun, Y.; Liao, L.; Wan, X.; Du, Q.; Chen, X.; Song, H.; Zhu, R.; et al. Whole life cycle performance degradation test and RUL prediction research of fuel cell MEA. *Appl. Energy* **2022**, *310*, 118556.
17. Li, J.; Lu, J.; Chen, C. Tool wear state prediction based on feature-based transfer learning. *Int. J. Adv. Manuf. Technol.* **2021**, *113*, 3283–3301.
18. Ding, Y.; Jia, M.; Miao, Q.; Huang, P. Remaining useful life estimation using deep metric transfer learning for kernel regression. *Reliab. Eng. Syst. Saf.* **2021**, *212*, 107583.
19. Ding, Y.; Ding, P.; Jia, M. A novel remaining useful life prediction method of rolling bearings based on deep transfer auto-encoder. *IEEE Trans. Instrum. Meas.* **2021**, *70*, 3509812.
20. Shen, F.; Yan, R. A new intermediate domain SVM-based transfer model for rolling bearing RUL prediction. *IEEE/ASME Trans. Mechatron.* **2021**, *27*, 1357–1369.
21. Mao, W.; Liu, J.; Chen, J.; Liang, X. An interpretable deep transfer learning-based remaining useful life prediction approach for bearings with selective degradation knowledge fusion. *IEEE Trans. Instrum. Meas.* **2022**, *71*, 3508616.
22. Xia, P.; Huang, Y.; Li, P.; Liu, C.; Shi, L. Fault knowledge transfer assisted ensemble method for remaining useful life prediction. *IEEE Trans. Ind. Inform.* **2021**, *18*, 1758–1769.
23. Cheng, H.; Kong, X.; Wang, Q.; Ma, H.; Yang, S. The two-stage RUL prediction across operation conditions using deep transfer learning and insufficient degradation data. *Reliab. Eng. Syst. Saf.* **2022**, *225*, 108581.
24. Zhuang, J.; Jia, M.; Ding, Y.; Ding, P. Temporal convolution-based transferable cross-domain adaptation approach for remaining useful life estimation under variable failure behaviors. *Reliab. Eng. Syst. Saf.* **2021**, *216*, 107946.
25. Miao, M.; Yu, J.; Zhao, Z. A sparse domain adaption network for remaining useful life prediction of rolling bearings under different working conditions. *Reliab. Eng. Syst. Saf.* **2022**, *219*, 108259.
26. Miao, M.; Yu, J. A deep domain adaptative network for remaining useful life prediction of machines under different working conditions and fault modes. *IEEE Trans. Instrum. Meas.* **2021**, *70*, 3518214.
27. Li, X.; Li, J.; Zuo, L.; Zhu, L.; Shen, H.T. Domain adaptive remaining useful life prediction with transformer. *IEEE Trans. Instrum. Meas.* **2022**, *71*, 3521213.
28. Fan, Y.; Nowaczyk, S.; Rögnvaldsson, T. Transfer learning for remaining useful life prediction based on consensus self-organizing models. *Reliab. Eng. Syst. Saf.* **2020**, *203*, 107098.
29. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. *Adv. Neural Inf. Process. Syst.* **2017**, *30*, 1–11.
30. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021.
31. Saxena, A.; Goebel, K.; Simon, D.; Eklund, N. Damage propagation modeling for aircraft engine run-to-failure simulation. In Proceedings of the 2008 International Conference on Prognostics and Health Management, Denver, CO, USA, 6 October 2008.
32. Hu, C.; Youn, B.D.; Wang, P.; Yoon, J.T. Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life. *Reliab. Eng. Syst. Saf.* **2012**, *103*, 120–135.
33. Ramasso, E.; Saxena, A. Performance benchmarking and analysis of prognostic methods for CMAPSS datasets. *Int. J. Progn. Health Manag.* **2014**, *5*, 1–15.
34. Ma, J.; Su, H.; Zhao, W.-L.; Liu, B. Predicting the remaining useful life of an aircraft engine using a stacked sparse autoencoder with multilayer self-learning. *Complexity* **2018**, *2018*, 3813029.
35. Ma, J.; Liu, X.; Zou, X.; Yue, M.; Shang, P.; Kang, L.; Jemei, S.; Lu, C.; Ding, Y.; Zerhouni, N.; et al. Degradation prognosis for proton exchange membrane fuel cell based on hybrid transfer learning and intercell differences. *ISA Trans.* **2021**, *113*, 149–165.
36. Wu, J.; Hu, K.; Cheng, Y.; Zhu, H.; Shao, X.; Wang, Y. Data-driven remaining useful life prediction via multiple sensor signals and deep long short-term memory neural network. *ISA Trans.* **2020**, *97*, 241–250.
37. Elsheikh, A.; Yacout, S.; Ouali, M.-S. Bidirectional handshaking LSTM for remaining useful life prediction. *Neurocomputing* **2019**, *323*, 148–156.
38. Kong, Z.; Cui, Y.; Xia, Z.; Lv, H. Convolution and long short-term memory hybrid deep neural networks for remaining useful life prognostics. *Appl. Sci.* **2019**, *9*, 4156.

**Figure 3.** Transferability measurement based on TL-EDM and high-transferability source domain selection.

| Symbol | Description |
|---|---|
| T24 | Total temperature at LPC outlet (°R) |
| T30 | Total temperature at HPC outlet (°R) |
| Ps30 | Static pressure at HPC outlet (psia) |
| PHI | Ratio of fuel flow to Ps30 (pps/psi) |
| P30 | Total pressure at HPC outlet (psia) |
| T50 | Total temperature at LPT outlet (°R) |
| BPR | Bypass ratio |
| Nf | Physical fan speed (rpm) |

| | Engine Numbers |
|---|---|
| Health baseline engines | 77#, 82#, 94#, 14#, 8#, 1#, 46#, 60#, 27#, 81# |
| Fault baseline engines | 55#, 61#, 21#, 83#, 7#, 39#, 90#, 72#, 65#, 15# |

| Parameter Name | Win_{d} | Win_{HI} | Thre_{a} |
|---|---|---|---|
| Parameter values | 5 | 10 | 0.00075 |

| Parameter Name | Ther_{d-up} | Ther_{d-low} | S | r |
|---|---|---|---|---|
| Parameter values | 1/3 | 2/3 | 10 | 1.5 |

| Parameter Name | Input Layer Neuron | Encoder-1 Neuron | Encoder-2 Neuron | Encoder-3 Neuron | Encoder-4 Neuron | Decoder-1 Neuron |
|---|---|---|---|---|---|---|
| Parameter values | 14 | 14 | 10 | 8 | 6 | 4 |

| Parameter Name | Decoder-2 Neuron | Output Layer Neuron | Encoder Activation Function | Decoder Activation Function | Loss Function | Optimizer |
|---|---|---|---|---|---|---|
| Parameter values | 2 | 1 | LeakyReLU | tanh | MSE | Adam |

| Parameter Name | L2 Regularization Coefficient | Epoch | Batch Size | Dropout |
|---|---|---|---|---|
| Parameter values | 0.05 | 1000 | 200 | 0.08 |

| Method/Index | Total Score | Acceptable Rate | Mean Relative Accuracy | RMSE |
|---|---|---|---|---|
| Proposed framework | 278.15 | 70% | 95.24 | 12.34 |
| Without setting transfer timing | 1567.77 | 44% | 91.18 | 21.95 |
| Without the general model transfer | 312.05 | 55% | 94.11 | 13.92 |
| Without setting training data weights | 326.64 | 69% | 94.40 | 15.00 |
| Without transfer scheme | 1682.28 | 40% | 90.96 | 22.32 |
| DLSTM [36] | 655 | - | - | 18.33 |
| BHSLSTM [37] | 376.64 | 63% | - | - |
| C-LSTM [38] | 303 | - | 84.66% | 16.127 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Liu, X.; Ma, J.; Song, D.
Personalized Transfer Learning Framework for Remaining Useful Life Prediction Using Adaptive Deconstruction and Dynamic Weight Informer. *Axioms* **2023**, *12*, 963.
https://doi.org/10.3390/axioms12100963
