Article

An Approach to Data Modeling via Temporal and Spatial Alignment

School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Processes 2024, 12(1), 62; https://doi.org/10.3390/pr12010062
Submission received: 1 December 2023 / Revised: 22 December 2023 / Accepted: 25 December 2023 / Published: 27 December 2023

Abstract

It is important for a data model to respect the observation windows of the physical variables behind the data. In this paper, a multivariate data alignment method is proposed that accommodates different time scales and different variable roles. First, the length of the sliding window is determined from the frequency characteristics of a reconstructed time series. Then, each time series is aligned to the window length by a sequence-to-sequence neural network, trained with dynamic time warping (DTW) as the loss function to minimize the loss of time-series information. Finally, an attention mechanism is introduced to adjust the effect of different variables, which ensures that the resulting data matrix accords with the intrinsic relations of the actual system. The effectiveness of the approach is demonstrated and validated on the Tennessee Eastman (TE) model.

1. Introduction

The data of complex industrial processes mainly appear in the form of time series, which contain rich process information [1,2]. In continuous processes, the inherent laws of the process are often grasped through modeling [3,4,5,6], and the models can be classified as either mechanistic or data models. The data model, which is built directly from data acquired by sensors or supervisory control and data acquisition (SCADA) systems, shows great potential in many fields, including process monitoring, fault detection and diagnosis, and energy systems [7,8,9,10], because these data can precisely represent the process states in the current environment.
Data models for multivariate time series in complex industrial processes can be used in two directions. One approach, prioritizing the temporal features, is to establish various mathematical forms along the timeline, including numerous empirical formulas, the autoregressive model (AR), the moving average model (MA), the auto-regressive moving average model (ARMA), narrow and deep neural networks (NNs), and so on. Reference [11] demonstrated lemmas and theorems about the least squares and multi-innovation least squares parameter estimation algorithms after reviewing some important contributions, and extended the results for linear regressive systems with white noise to other systems with colored noise. Reference [12] proposed a filtering-based extended stochastic gradient (ESG) algorithm, followed by a filtering-based multi-innovation ESG algorithm for improving the parameter estimation accuracy of multivariable systems with moving average noise. In [13], an auto-regressive integrated moving average (ARIMA) model was used to estimate the values of the predictor variables, and an algorithm combining time-series analysis methods with machine learning techniques was used to predict the remaining useful life (RUL) of aircraft engines. Reference [14] proposed a deep neural network with contrastive divergence for short-term natural gas load forecasting; the proposed network outperformed traditional artificial neural networks by 9.83% in weighted mean absolute percent error across 62 operating areas. In [15], a deep residual shrinkage network was developed to improve feature learning from highly noised vibration signals, achieving high fault-diagnosis accuracy. Reference [16] provided a state-of-the-art review of deep learning and of the methods that have made the training of deep learning models possible at very high scale in various modern practices. This kind of approach, in which a mathematically equivalent model, whether explicit or implicit, is built from historical data, is suitable for invariant steady operation. When the actual situation differs from the historical situation, significant errors arise. Some scholars have proposed online parameter adjustment, adaptive training, and other measures as improvements. In [17], a novel high-gain observer-based identification technique was proposed for systems with bounded process and measurement noises, and the identified parameters and response curves were applied to a gas turbine engine using input data recorded from the engine testbed. In [18], a hybrid approach was presented that integrated a data-driven method with a coarse model to improve operational conversion efficiency in air-conditioning refrigeration. In [19], a separation method was used to transform the original optimization problem into quadratic and nonlinear optimization problems in order to overcome the estimation difficulty caused by the highly nonlinear relations between the parameters and the output of the radial basis function-based state-dependent autoregressive model. However, online monitoring of a time-varying complex system remains challenging with this kind of method.
The other approach, prioritizing the spatial relations among states, is to directly construct a data matrix by selecting a specific observation window length and scanning the time axis with sliding windows. This kind of model brings the spatial relationships of the variables into data matrices over a finite time, which are then analyzed with matrix theory or multidimensional statistical theory. Typical models include principal component analysis (PCA), independent component analysis (ICA), and so on. Reference [20] critically considered how PCA is employed with spatial data and provided a brief guide that includes robust and compositional PCA variants, links to factor analysis, latent variable modeling, and multilevel PCA. Reference [21] gave an overview of the analysis of spatial and functional data observed over complicated two-dimensional domains. Reference [22] presented data-driven dimension reduction techniques for dynamical systems based on transfer operator theory, as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. Recently, deep neural networks have also been considered for modeling spatial relationships. Reference [23] provided an overview of traditional statistical and machine learning perspectives for modeling spatial and spatiotemporal data, and then focused on a variety of hybrid models recently developed for latent process, data, and parameter specifications. Reference [24] provided a comprehensive overview of methods to analyze deep neural networks and insight into how interpretable and explainable methods help us to understand time-series data. This kind of approach has an inherent advantage for real-time monitoring because it uses real-time data directly to construct the data matrices.
A standard procedure for data modeling is to first determine the length of a sliding window based on experience, then preprocess the data (e.g., filtering, normalization), and finally build the data matrix for analysis. To construct a data matrix, the data in the sliding window must be of equal length. However, a data series describing a complex industrial object is essentially a reflection of the characteristics of its variables, and different physical variables have different time-scale characteristics. For example, temperature usually has strong inertia, and an excessive sampling frequency yields a large number of near-identical samples, which negatively affect the analysis of the matrix model. Flow rate is affected by fluid dynamics, so an excessive sampling frequency instead introduces noise that disturbs the measurement of the actual state of the fluid and causes control difficulties. Vibration signals reflect rapid changes in the equipment state in real time, so a low sampling frequency may miss important characteristics. Data alignment is therefore a prerequisite for constructing a window matrix. There are two modes, expansion and compression, for aligning data of different lengths. The expansion mode aligns shorter data to longer data using data-filling or embedding techniques; conversely, the compression mode aligns longer data to shorter data using data division or segmenting. Reference [25] introduced a generative adversarial network (GAN) framework to generate synthetic data for data imputation. In [26], a multivariate time-series GAN was proposed for multivariate time-series distribution modeling by introducing multi-channel convolution into GANs. These are purely mathematical approaches that do not consider the properties and effects of the underlying physical quantities, which can lead to differences between the data matrix and the actual system. In fact, these physical variables have relatively stable effects on the system within a limited time window, but their effects are susceptible to change under the influence of the data matrix construction.
The aim of this paper is to solve three problems faced by data modeling: (1) different time scales; (2) different roles of variables; and (3) the difficulty of determining the window length. This paper proposes a modeling method that bridges the gap between modeling and analysis by constructing a data matrix. The effectiveness of the proposed approach was validated on the Tennessee Eastman (TE) model. The advantages of this method are as follows.
(1)
This approach builds a surrogate of the original sequence through data sequence reconstruction, which retains the main frequency information of the original sequence. The frequency information of the surrogate is used to determine the minimum length of the sliding window, overcoming the difficulty of determining the window length empirically;
(2)
A sequence-to-sequence method is used to map data sequences from different time scales to a time window of the same length, preserving the information of the original time series as completely as possible by keeping its internal relationships. This achieves data alignment across different time scales;
(3)
The variable relationships from the previous sliding window are used to adjust the current data matrix by introducing an attention mechanism that considers the contribution rates of the different variables to the system, thereby enhancing the robustness of the data matrix.
The rest of this paper is organized as follows. Section 2 gives the preliminaries of the data matrix of a time series, the sequence-to-sequence model, and the dynamic time warping algorithm. Section 3 describes the proposed approach, including the integral structure, the determination of the window length, the S2S transformation, the matrix modification, and the workflow. The verification is provided in Section 4, followed by the conclusion in Section 5.

2. Preliminaries

2.1. Data Matrix

For the time series $x = \{x_1, x_2, \ldots, x_l, x_{l+1}, \ldots\}$, the Hankel matrix is constructed as Equation (1):

$$H_L = \begin{bmatrix} x_l & x_{l+1} & \cdots & x_{l+T-1} \\ x_{l+1} & x_{l+2} & \cdots & x_{l+T} \\ \vdots & \vdots & \ddots & \vdots \\ x_{l+L-1} & x_{l+L} & \cdots & x_{l+L+T-2} \end{bmatrix} \in \mathbb{R}^{L \times T} \tag{1}$$

where $x_l$ is an initial sample randomly selected from the time series, $L$ is the depth of the Hankel matrix, and $T$ is the length of the observation window.
The Hankel matrix is usually taken as the data model for a time series because it has a clear physical meaning, with the rows of the matrix being of equal length. Its good structural characteristics suit communication protocols such as differential encoding (DE) and differential phase shift keying (DPSK), which transmit data by altering the phase or phase difference; the Hankel matrix helps to understand and handle these phase changes or phase differences. For some time-series processing algorithms, the recursive equation method implements the decoding or decryption of signals by calculating the determinant or inverse of the Hankel matrix, and the eigenvalue and eigenvector method can be used to analyze properties of the Hankel matrix such as stability and periodicity.
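As a concrete illustration of Equation (1), the following Python sketch (our own illustration rather than code from this work; the function name `hankel_window` and its arguments are hypothetical) builds the $L \times T$ window matrix from a univariate series:

```python
import numpy as np

def hankel_window(x, start, L, T):
    """Build the L x T Hankel matrix of Equation (1) from series x,
    beginning at the (randomly chosen) initial sample index `start`."""
    assert start + L + T - 1 <= len(x), "series too short for this window"
    # Row i holds x[start+i], x[start+i+1], ..., x[start+i+T-1]
    return np.array([x[start + i : start + i + T] for i in range(L)])

# Example: a 4 x 5 Hankel matrix from a sine series
x = np.sin(0.1 * np.arange(100))
H = hankel_window(x, start=10, L=4, T=5)
print(H.shape)  # (4, 5)
```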

2.2. Sequence-to-Sequence Model

The sequence-to-sequence model [27] is a type of encoder–decoder neural network specialized for sequential data. It consists of two parts: an "encoder" that encodes the input information and converts it into a vector, and a "decoder" that converts the vector into an output sequence. Through training, the mapping keeps the output sequence consistent with the input sequence. The encoder is formulated as Equations (2)–(4):
$$h_{e,t} = \tanh(W[h_{e,t-1}, x_t] + b) \tag{2}$$

$$o_{e,t} = \mathrm{softmax}(V h_{e,t} + c) \tag{3}$$

$$c = \tanh(U h_{e,T}) \tag{4}$$

where $h_{e,t}$ is the hidden state, $o_{e,t}$ is the output, $c$ is the semantic vector output by the encoder, $W$, $U$, and $V$ are weight matrices, and $h_{e,T}$ is the last hidden state of the encoder.
The decoder is formulated as Equations (5) and (6):

$$h_{d,t} = \tanh(W[h_{d,t-1}, y_{t-1}, c] + b) \tag{5}$$

$$o_{d,t} = \mathrm{softmax}(V h_{d,t} + c) \tag{6}$$

where $c$ is the semantic vector received from the encoder, $h_{d,t}$ is the decoder hidden state, and $y_{t-1}$ is the previous output, with the initial $y_0$ serving as a start signal.
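A minimal sketch of this encoder–decoder pass is given below, assuming scalar series and a linear readout in place of the softmax of Equations (3) and (6), which better suits real-valued process data; all names and dimensions here are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: scalar series, hidden size 8 (illustrative choices)
H, D_IN, D_OUT = 8, 1, 1
We = rng.normal(0, 0.1, (H, H + D_IN)); be = np.zeros(H)        # encoder weights, Eq. (2)
U = rng.normal(0, 0.1, (H, H))                                  # semantic map, Eq. (4)
Wd = rng.normal(0, 0.1, (H, H + D_OUT + H)); bd = np.zeros(H)   # decoder weights, Eq. (5)
V = rng.normal(0, 0.1, (D_OUT, H)); cb = np.zeros(D_OUT)        # readout

def encode(xs):
    h = np.zeros(H)
    for x in xs:  # Eq. (2): h_{e,t} = tanh(W [h_{e,t-1}, x_t] + b)
        h = np.tanh(We @ np.concatenate([h, np.atleast_1d(x)]) + be)
    return np.tanh(U @ h)  # Eq. (4): semantic vector c from the last hidden state

def decode(c, T_out):
    h, y, ys = np.zeros(H), np.zeros(D_OUT), []  # initial y acts as the start signal
    for _ in range(T_out):  # Eq. (5): h_{d,t} = tanh(W [h_{d,t-1}, y_{t-1}, c] + b)
        h = np.tanh(Wd @ np.concatenate([h, y, c]) + bd)
        y = V @ h + cb  # linear readout in place of the softmax of Eqs. (3)/(6)
        ys.append(y[0])
    return np.array(ys)

# Map a 30-sample observation window to a 12-sample window
out = decode(encode(np.sin(0.2 * np.arange(30))), T_out=12)
print(out.shape)  # (12,)
```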

2.3. DTW Algorithm

The dynamic time warping (DTW) algorithm [28] measures the similarity of two time series of different lengths. Suppose there are two time series $Q = \{q_1, q_2, \ldots, q_n\}$ and $C = \{c_1, c_2, \ldots, c_m\}$ of different lengths. An $n \times m$ matrix grid is constructed in which the element $w(i,j)$ represents the distance between point $q_i$ and point $c_j$. The DTW algorithm finds a warping path through the grid, and the grid points passed through are the points where the two sequences are aligned. The DTW algorithm thus becomes the optimization problem of Equation (7):

$$DTW(Q, C) = \min \frac{1}{K} \sum_{k=1}^{K} w_k \tag{7}$$

where $K$ is the number of grid points passed through and $w_k$ is the distance corresponding to the $k$-th selected grid point.

3. The Proposed Method

3.1. The Integral Structure

The integral structure of the proposed approach is shown in Figure 1.
The structure consists of three parts: the window length module, the S2S transformation module, and the matrix modification module.
The window length module determines the length of the observation window. For a data model of a time series, the characteristics of an industrial process can only be obtained through observation windows. The longer the observation window, the richer the process information; however, a long window impairs real-time analysis because many sampling points take longer to accumulate. In practice, the window length can often only be set from experience, because the relation between the physical characteristics of the object and the length of the observation window is unknown. Considering that secondary information and noise contribute little to process monitoring, the time series can be reconstructed within a certain allowable accuracy; the reconstructed time series then carries the essential sequence information, from which the length of the observation window can easily be determined.
The S2S transformation converts windows of different lengths to a uniform length. The number of samples in an observation window whose length is set by the window length module differs across channels, because the physical variables behind the time series have different time scales. This makes it impossible to establish a window data matrix directly from data of different lengths. The S2S module forms uniform sequences of equal length, meeting the spatial alignment requirement by transforming observation window sequences of different lengths to the same length without losing time-series information.
The matrix modification module corrects the deviation between the matrix and the process characteristics caused by the different impacts that the variables behind the window data have on the process. With the help of the window length module and the S2S transformation module, a data matrix of standard length is established for the observation window. However, this matrix is built by modeling each variable's channel independently, which implicitly assumes that every variable has an equal effect on the process. In fact, the contributions of different variables to the process are not equal, especially for time-varying processes. To account for the different effects of the variables, the attention mechanism is applied to redistribute the weight occupied by each variable, forming the final model of the window data.

3.2. Window Length Module

The sliding window length is determined by the frequency-analysis method from our previous work [29]. The idea is to replace the original time series with a clearly defined signal and use this clear information to construct the observation window. Specifically, a frequency-domain decomposition of the time series is performed, and the series is reconstructed within the allowable accuracy using the frequency components that contribute significantly to the original signal. The frequency values of these components are obtained during reconstruction, and the least common multiple of the periods corresponding to these frequencies forms a new cycle. The window corresponding to this cycle is taken as the sliding window length, because one such cycle contains all the information of the structural components. The main steps are frequency decomposition, frequency reconstruction, information loss assessment, effective frequency determination, and window length computation. More details are provided in reference [29].
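The following Python sketch shows the gist of this procedure under our own assumptions (FFT bins as the frequency components, relative reconstruction error as the loss measure, and a tolerance `tol` as the allowable accuracy); it illustrates the idea of [29], not the authors' implementation:

```python
import numpy as np
from functools import reduce
from math import gcd

def window_length(x, tol=0.05):
    """Keep the strongest FFT components until the relative reconstruction
    error drops below `tol`, then return the LCM of their periods (samples)."""
    N = len(x)
    X = np.fft.rfft(x - x.mean())
    kept = np.zeros_like(X)
    for k in np.argsort(np.abs(X))[::-1]:       # bins by descending magnitude
        if k == 0:
            continue                            # skip the DC bin
        kept[k] = X[k]
        recon = np.fft.irfft(kept, N) + x.mean()
        if np.linalg.norm(x - recon) / np.linalg.norm(x) < tol:
            break
    periods = [round(N / k) for k in np.nonzero(kept)[0]]    # bin k has period N/k
    return reduce(lambda a, b: a * b // gcd(a, b), periods)  # least common multiple

# Two superposed sines with periods of 50 and 30 samples
t = np.arange(600)
x = np.sin(2 * np.pi * t / 50) + 0.5 * np.sin(2 * np.pi * t / 30)
print(window_length(x))  # 150 = LCM(50, 30)
```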

3.3. S2S Transformation Module

The S2S transformation module includes a sequence-to-sequence construction and DTW-based training.
(1)
Sequence-to-sequence construction
In the classical sequence-to-sequence model, the encoder and decoder both use an RNN, because the semantic coding must contain the information of the whole input sequence. The encoder compresses the input sequence into a vector of specified length, which can be regarded as the semantics of that sequence. The decoder takes its output at the previous moment as its input at the next moment and, at each step, decodes according to the semantic vector produced by the encoder. Here, the long short-term memory (LSTM) neural network is selected as the carrier of the sequence-to-sequence transformation. The construction is based on the recurrent neural network (RNN) model, with gates added to solve the short-term memory problem of the RNN, so that the LSTM can effectively use long-term temporal information. The structure is shown in Figure 2, where A is a repetitive structural unit and C is the semantic vector.
The LSTM adds three logical control units, the input gate, output gate, and forget gate, to the basic structure of the RNN. Each gate is combined with a multiplicative element by setting weights on the connecting edges between the memory cells and the other parts of the network, so that the input and output of the information flow and the state of the cell unit are controlled.
The input gate determines whether new information can be written to the LSTM cell. When the input gate is close to zero, the incoming signal is not written to the cell, which prevents the cell from being affected by noise or meaningless information. The input gate can be thought of as a switch that controls the inflow of information.
The output gate determines what information can be read out of the LSTM cell. When the output gate is close to zero, the internal state of the cell, also known as the cell state, is not output, which prevents the cell from emitting irrelevant or confusing information. The output gate can be considered a filter that regulates the flow of information.
The forget gate determines which old information is erased from the LSTM cell. Where the forget gate is close to zero, the corresponding portion of the cell state is forgotten, enabling the LSTM cell to adapt to new information without being influenced by old, irrelevant information. The forget gate can be thought of as a 'scavenger' that controls the forgetting and updating of information.
These gate weights are set and updated by the rest of the network. The LSTM memory unit contains specific weights for the input gate, forget gate, output gate, and candidate cell state. The initial values of the weights are typically sampled from a random distribution and then updated during training by optimization algorithms such as back propagation and gradient descent. By continuously adjusting these weights, the LSTM can better process sequential data and achieve long-term memory.
The LSTM is defined by Equations (8)–(13):

$$f_t = \mathrm{sigmoid}(W_f \cdot [h_{t-1}, x_t] + b_f) \tag{8}$$

$$i_t = \mathrm{sigmoid}(W_i \cdot [h_{t-1}, x_t] + b_i) \tag{9}$$

$$o_t = \mathrm{sigmoid}(W_o \cdot [h_{t-1}, x_t] + b_o) \tag{10}$$

$$\tilde{c}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c) \tag{11}$$

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \tag{12}$$

$$h_t = o_t \odot \tanh(c_t) \tag{13}$$

where the $W_*$ are the recursive connection weights of the corresponding gates, $\mathrm{sigmoid}$ and $\tanh$ are the two activation functions, $i_t$ is the input gate, $f_t$ is the forget gate, $o_t$ is the output gate, $c_t$ is the memory cell, $\tilde{c}_t$ is the candidate memory cell, $h_t$ is the hidden state, and $\odot$ denotes element-wise multiplication.
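A single LSTM step per Equations (8)–(13) can be sketched in a few lines of Python; the parameter dictionary and toy dimensions below are our own illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step implementing Equations (8)-(13); p holds the
    weight matrices W_* (shape H x (H+D)) and biases b_* (shape H)."""
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    f = sigmoid(p["Wf"] @ z + p["bf"])           # Eq. (8)  forget gate
    i = sigmoid(p["Wi"] @ z + p["bi"])           # Eq. (9)  input gate
    o = sigmoid(p["Wo"] @ z + p["bo"])           # Eq. (10) output gate
    c_tilde = np.tanh(p["Wc"] @ z + p["bc"])     # Eq. (11) candidate cell
    c = f * c_prev + i * c_tilde                 # Eq. (12) cell update
    h = o * np.tanh(c)                           # Eq. (13) hidden state
    return h, c

# Toy run: hidden size 4, scalar input
rng = np.random.default_rng(1)
H, D = 4, 1
p = {f"W{g}": rng.normal(0, 0.1, (H, H + D)) for g in "fioc"}
p.update({f"b{g}": np.zeros(H) for g in "fioc"})
h, c = np.zeros(H), np.zeros(H)
for x in np.sin(0.3 * np.arange(20)):
    h, c = lstm_step(np.atleast_1d(x), h, c, p)
```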
(2)
DTW-based training
In the training of a sequence-to-sequence model, the data features at time $t$ are first fed to the input layer, and the results pass through the activation function. These results, together with the hidden-layer output at time $t-1$ and the information stored in the cell at time $t-1$, are fed to the LSTM nodes. After passing through the input gate, output gate, forget gate, and cell, the data flow to the next hidden layer or to the output layer, where the LSTM node outputs reach the output neurons. The back-propagation errors of the Euclidean distance are then calculated, and the individual weights are updated.
However, the Euclidean distance is not effective for measuring the distance between the two time series here, because the lengths of the source and the target are inconsistent. To find a parsimonious short representation of a long time series that remains as similar as possible to the original sequence, the dynamic time warping (DTW) algorithm is introduced to measure the similarity of two sequences of different lengths. The optimal warping path is defined as the error loss function, with boundary conditions $w_1 = (1,1)$ and $w_K = (m,n)$, indicating that the path starts at the lower-left corner of the grid and ends at the upper-right corner; the $k$-th element of the warping path is denoted $w_k = (i,j)_k$. Under these conditions, the optimization problem is solved by the dynamic programming recursion of Equation (14):

$$\gamma(i,j) = d(q_i, c_j) + \min\{\gamma(i-1,j-1),\ \gamma(i-1,j),\ \gamma(i,j-1)\} \tag{14}$$

where $\gamma(i,j)$ denotes the cumulative distance up to the point $(i,j)$.
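The recursion of Equation (14) can be sketched in Python as follows. Note that this is plain DTW, whose hard minimum is only piecewise differentiable; when gradients are required for training, a smoothed relaxation such as soft-DTW is a common substitute (our remark, not a detail given in this work):

```python
import numpy as np

def dtw_distance(Q, C):
    """Dynamic programming solution of Equations (7)/(14): cumulative
    distance gamma(i, j) with boundaries w_1 = (1,1) and w_K = (m,n)."""
    n, m = len(Q), len(C)
    gamma = np.full((n + 1, m + 1), np.inf)
    gamma[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(Q[i - 1] - C[j - 1])               # local distance d(q_i, c_j)
            gamma[i, j] = d + min(gamma[i - 1, j - 1], # Eq. (14)
                                  gamma[i - 1, j],
                                  gamma[i, j - 1])
    return gamma[n, m]

# Two series of different lengths describing the same shape
Q = np.sin(np.linspace(0, 2 * np.pi, 40))
C = np.sin(np.linspace(0, 2 * np.pi, 25))
print(dtw_distance(Q, C))  # small value despite the length mismatch
```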

3.4. Matrix Modification Module

For the multivariate model, the influence of different variables on the final matrix is further investigated. Considering the positive effect of SENet [30] on channel weighting, the attention mechanism is adopted to adjust the weight of each variable's role. SENet introduces a squeeze module and an excitation module: in the squeeze operation, the feature map of each channel is globally average-pooled and compressed into a single feature value; in the excitation operation, fully connected layers and non-linear activation functions (e.g., sigmoid and ReLU) are used to learn the weight of each channel and capture the relationships between channels. Finally, these weights are applied to the original feature maps by channel-wise multiplication to produce a weighted matrix model, allowing the model to update the feature maps adaptively. The entire procedure is represented by Equation (15):
$$X' = \mathrm{Scale}(X) = X \cdot \mathrm{sigmoid}(W_2 \cdot \mathrm{ReLU}(W_1 \cdot \mathrm{Pool}(X))) \tag{15}$$

where $X$ is the input feature map, $X'$ is the final output feature map, $\mathrm{Pool}$ is the global average pooling operation, $\mathrm{ReLU}$ and $\mathrm{sigmoid}$ are the activation functions, and $W_1 \in \mathbb{R}^{C' \times C}$ and $W_2 \in \mathbb{R}^{C \times C'}$ are the parameters of the fully connected layers, in which $C'$ is a reduced dimension.
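Equation (15) amounts to the following channel-reweighting step, sketched here in Python with our own toy dimensions (the reduced dimension `Cr` stands in for $C'$):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_scale(X, W1, W2):
    """Channel attention of Equation (15) on a feature map X of shape
    (H, W, C): squeeze by global average pooling, excite through two
    fully connected layers, then rescale each channel."""
    s = X.mean(axis=(0, 1))                  # Pool(X): one value per channel
    w = sigmoid(W2 @ relu(W1 @ s))           # channel weights in (0, 1)
    return X * w                             # broadcast over H and W

# Toy feature map: an L x T window matrix with C = 3 variable channels
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 16, 3))
C, Cr = 3, 2                                 # Cr: reduced dimension C'
W1 = rng.normal(0, 0.5, (Cr, C))
W2 = rng.normal(0, 0.5, (C, Cr))
print(se_scale(X, W1, W2).shape)             # (8, 16, 3)
```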
Samples collected from previous time series are used to construct the data matrix $A$ of an observation window with the standard length $L$. A singular value decomposition is then applied to the data matrix according to Equation (16):

$$A = U S V^{T} \tag{16}$$

The diagonal elements of $S$ form a vector $[S_{1,1}, S_{2,2}, \ldots, S_{m,m}]$, which after normalization gives the target vector $G$:

$$G = \left[ \frac{S_{1,1}}{\sum_{i=1}^{m} S_{i,i}},\ \frac{S_{2,2}}{\sum_{i=1}^{m} S_{i,i}},\ \ldots,\ \frac{S_{m,m}}{\sum_{i=1}^{m} S_{i,i}} \right]^{T} \tag{17}$$

where $S_{i,i}$ is the $i$-th diagonal element of $S$ and $m$ is the number of diagonal elements.
By changing the initial sampling time, many data matrices, with their corresponding target vectors, are obtained as the training database. A SENet is constructed to implement the attention mechanism by dispersing the output value to various components. Thus, the final data matrix is obtained by performing the product operation with the aligned window matrix.
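A sketch of the target-vector computation of Equations (16) and (17) in Python (the function name `target_vector` is our own):

```python
import numpy as np

def target_vector(A):
    """Equations (16)-(17): SVD of the window matrix A, then the
    normalized singular values as the target vector G."""
    s = np.linalg.svd(A, compute_uv=False)  # diagonal elements of S
    return s / s.sum()                      # G, whose entries sum to 1

# Window matrix with 3 variable rows (as in the TE example of Section 4)
rng = np.random.default_rng(3)
A = rng.normal(size=(3, 20))
print(target_vector(A))  # three normalized singular values
```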
The process of the proposed approach is as follows:
Step 1: Determine the length of the sliding time window according to Section 3.2;
Step 2: Construct the S2S transformation with the long short-term memory (LSTM) neural network, whose inputs are sampling values from the different observation windows and whose outputs are the standard observation window data;
Step 3: Train the S2S network with DTW as the loss function;
Step 4: Build the data matrix with the standard window length and construct and train the attention mechanism;
Step 5: Modify the data matrix according to Section 3.4.

4. Validation

4.1. Tennessee Eastman Process Model

The Tennessee Eastman (TE) process is widely used in the study of process control technology because it is an excellent simulation of an actual chemical process [31]. The structure of the TE model is shown in Figure 3, in which the black parts are the original model, and the red parts are the modifications to meet data requirements by adding the output and disturbance.
The TE process involves four gaseous reactants, A, C, D, and E, which produce two products accompanied by a by-product. All the reactions are exothermic and irreversible. The process mainly consists of five operating units: a reactor, a product condenser, a gas–liquid separator, a recycle compressor, and a vapor stripping tower. In the presence of a non-volatile catalyst, the gaseous reactants are fed into the reactor, where they react in the gas phase to form the liquid products. The reactor includes a cooling package to remove the heat produced by the reactions. The product leaves as a gas, together with some unreacted material. The stream from the reactor first passes through the condenser, where it is condensed, and then enters the gas–liquid separator. From the separator, the uncondensed components are returned to the reactor feed by a centrifugal compressor, while the condensed components are sent to the product stripping section to remove the remaining reactants. The TE model is built on the unit operations of "three transfers and one reaction" (momentum, heat, and mass transfer plus chemical reaction). Material balance calculations give the main material inflow and outflow of the reactor, separator, and stripping tower. The heat transfer equation is used for the reactor jacket, the condenser shell, and the reboiler of the stripping tower. Flow calculations give the flow rate and pressure change of each pipeline between units, and the gas–liquid equilibrium equation is used to calculate the gas–liquid conversion in the main units.

4.2. Data Acquisition and Data Alignment

The Simulink file of the TE model was run in MATLAB R2020b on Windows 10, with an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz and 64 GB of memory. Three typical time series, the pressure, temperature, and level of the reactor, were obtained and are shown in Figure 4, Figure 5 and Figure 6.
Figure 4, Figure 5 and Figure 6 show that the pressure varies over a wide range, between 2756 and 2811 kPa; the level varies between 62.5 and 67.6%, with some noise, as its waveform is not smooth; and the temperature varies between 122.86 and 122.94 °C, with very evident noise. The data sequences of these three channels were reconstructed from the frequencies with the main contributions after frequency decomposition. The reconstructed sequences are indicated by the red curves in Figure 4, Figure 5 and Figure 6; the error between each reconstructed sequence and its original sequence is very small.
The sequence-to-sequence structure was used to unify the length of each variable, and the results are shown in Figure 7. The blue, red, and cyan curves represent the pressure, the level, and the temperature, respectively; the triangles represent the actual sampling points, and the red stars represent the aligned data. Due to the different sampling rates of the pressure, level, and temperature, the amounts of data within an observation window are inconsistent. For example, at the sampling time of 1100 s, the pressure is sampled, but the level and the temperature have no values at this time. Therefore, a data matrix cannot be formed directly from the actual sampling points. In contrast to the unaligned triangles, the stars of the three variables share common sampling times, so the aligned data can easily be organized into a data matrix for the next adjustment.
Figure 7 also shows that the length-transformed sequences roughly reflect the range of variation and the trend of the original sequences, despite the inconsistency in the amount of data at the same time. This indicates that the uniform sequences produced by the sequence-to-sequence model are good representations of the original sequences.

4.3. Matrix Modification

The SENet model was constructed for matrix modification. In the input feature map of size H × W × C, H and W correspond to the length T and the depth L of the Hankel matrix, respectively, and C corresponds to the number of channels. The output Y_1 of the first fully connected layer is passed through the excitation operation to another fully connected layer, Y_2 = σ(W_2 Y_1). After training on the database from 0 s to 6000 s, the obtained output Y_2 represents the weight vector of the channels.
Ten groups of data matrices from the data sequences between 6000 s and 7000 s were randomly selected for correction using this attention mechanism. The results are shown in Table 1.
Table 1 shows that the attention vector changes somewhat with different singular values, which indicates that the attention vectors adapt to changes in the roles of the variables, since a change in the singular values implies a change in those roles. Taking the first, most influential term of the vector as an example, the difference between the maximum of 10.4311 and the minimum of 10.3355 approaches 0.1 before the attention mechanism is added; after adding the attention mechanism, the corresponding values are 8.4995 and 8.4638, a difference of 0.0357. It should be noted that the values of the vector elements decrease after the attention mechanism is added because the attention vector is normalized, so its items are less than 1. The vector can be scaled up uniformly, so this does not affect the analysis, as the relationships among the vector items are unchanged.

4.4. Comparison with Traditional Methods

Two traditional data matrix alignment methods were compared with the proposed approach. One method (expansion) aligns to the highest acquisition frequency by interpolating the low-frequency sequences, and the other (compression) aligns to the lowest frequency using the group averages of the high-frequency sequences. Ten sets of data were randomly selected from the original data, and the singular values of the data matrices were calculated to compare the robustness of the models. The accuracy of the model, the matrix size, and the window selection methods were also compared, and the results are presented in Table 2. Note that the frequency information contained in the corresponding observation window of the data matrix is taken as the measure of model accuracy.
Table 2 shows that the proposed modeling approach yields a small matrix scale with high accuracy and robustness compared with the traditional methods. It effectively solves the problems of different time scales and different variable roles arising from the differing characteristics of the variables, thereby facilitating the direct use of mathematical tools for further analysis.

5. Conclusions

This paper proposed a data modeling method that combines the data with the physical meaning of the variables behind the data. First, the alignment of the data matrix was completed through the S2S transformation on the basis of reconstructing the time series. This alignment does not simply pad data positions; it minimizes the information loss over the whole observation window and implements an overall optimized configuration of the reconstructed data sampling points. Second, the data matrix of an observation window was adjusted by modifying the contribution rates through an attention mechanism based on historical window information. The robustness of the matrix data models was improved by maximizing the use of the inherent features of the object, which weakens the influence of fluctuations in the physical variables on the data matrix. In future research, we will mine the implicit features of the matrix models established by this method in combination with application scenarios.

Author Contributions

Conceptualization and methodology, D.Z.; formal analysis, K.S.; writing—original draft preparation, D.Z.; writing—review and editing, S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to acknowledge research support from the School of Electrical Engineering and Automation at Tianjin University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, Z.; Chen, M.Z.Q.; Zhang, D. Special Issue on “Advances in condition monitoring, optimization and control for complex industrial processes”. Processes 2021, 9, 664. [Google Scholar] [CrossRef]
  2. McBride, K.; Sundmacher, K. Overview of surrogate modeling in chemical process engineering. Chem. Ing. Tech. 2019, 91, 228–239. [Google Scholar] [CrossRef]
  3. Jabbari, M.; Baran, I.; Mohanty, S.; Comminal, R.; Sonne, M.R.; Nielsen, M.W.; Spangenberg, J.; Hattel, J.H. Multiphysics modelling of manufacturing processes: A review. Adv. Mech. Eng. 2018, 10, 1687814018766188. [Google Scholar] [CrossRef]
  4. Zendehboudi, S.; Rezaei, N.; Lohi, A. Applications of hybrid models in chemical, petroleum, and energy systems: A systematic review. Appl. Energy 2018, 228, 2539–2566. [Google Scholar] [CrossRef]
  5. Liu, X.J.; Ni, Z.H.; Liu, J.F.; Cheng, Y.L. Assembly process modeling mechanism based on the product hierarchy. Int. J. Adv. Manuf. Technol. 2016, 82, 391–405. [Google Scholar] [CrossRef]
  6. Hay, T.; Visuri, V.V.; Aula, M.; Echterhof, T. A review of mathematical process models for the electric arc furnace process. Steel Res. Int. 2021, 92, 2000395. [Google Scholar] [CrossRef]
  7. Ge, Z.Q. Review on data-driven modeling and monitoring for plant-wide industrial processes. Chemom. Intell. Lab. Syst. 2017, 171, 16–25. [Google Scholar] [CrossRef]
  8. Wen, L.; Li, X.Y.; Gao, L.; Zhang, Y.Y. A new convolutional neural network-based data-driven fault diagnosis method. IEEE Trans. Ind. Electron. 2018, 65, 5990–5998. [Google Scholar] [CrossRef]
  9. Wu, Z.; Luo, G.; Yang, Z.L.; Guo, Y.J.; Li, K.; Xue, Y.S. A comprehensive review on deep learning approaches in wind forecasting applications. CAAI Trans. Intell. Technol. 2022, 7, 129–143. [Google Scholar] [CrossRef]
  10. Gao, Z.; Liu, X. An overview on fault diagnosis, prognosis and resilient control for wind turbine systems. Processes 2021, 9, 300. [Google Scholar] [CrossRef]
  11. Ding, F. Least squares parameter estimation and multi-innovation least squares methods for linear fitting problems from noisy data. J. Comput. Appl. Math. 2023, 426, 115107. [Google Scholar] [CrossRef]
  12. Pan, J.; Jiang, X.; Wan, X.K.; Ding, W.F. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int. J. Control Autom. Syst. 2017, 15, 1189–1197. [Google Scholar] [CrossRef]
  13. Ordóñez, C.; Lasheras, F.S.; Roca-Pardiñas, J.; Juez, F.J.D. A hybrid ARIMA-SVM model for the study of the remaining useful life of aircraft engines. J. Comput. Appl. Math. 2019, 346, 184–191. [Google Scholar] [CrossRef]
  14. Merkel, G.D.; Povinelli, R.J.; Brown, R.H. Short-term load forecasting of natural gas with deep neural network regression. Energies 2018, 11, 2008. [Google Scholar] [CrossRef]
  15. Zhao, M.H.; Zhong, S.S.; Fu, X.Y.; Tang, B.P.; Pecht, M. Deep residual shrinkage networks for fault diagnosis. IEEE Trans. Ind. Inform. 2020, 16, 4681–4690. [Google Scholar] [CrossRef]
  16. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  17. Gao, Z.; Dai, X.; Breikin, T.; Wang, H. Novel parameter identification by using a high-gain observer with application to a gas turbine engine. IEEE Trans. Ind. Inform. 2008, 4, 271–279. [Google Scholar] [CrossRef]
  18. Zhang, D.P.; Gao, Z.W. Improvement of refrigeration efficiency by combining reinforcement learning with a coarse model. Processes 2019, 7, 967. [Google Scholar] [CrossRef]
  19. Zhou, Y.H.; Ding, F. Modeling nonlinear processes using the radial basis function-based state-dependent autoregressive models. IEEE Signal Process. Lett. 2020, 27, 1600–1604. [Google Scholar] [CrossRef]
  20. Demsar, U.; Harris, P.; Brunsdon, C.; Fotheringham, A.S.; McLoone, S. Principal component analysis on spatial data: An overview. Ann. Assoc. Am. Geogr. 2013, 103, 106–128. [Google Scholar] [CrossRef]
  21. Sangalli, L.M. Spatial Regression With Partial Differential Equation Regularisation. Int. Stat. Rev. 2021, 89, 505–531. [Google Scholar] [CrossRef]
  22. Klus, S.; Nüske, F.; Koltai, P.; Wu, H.; Kevrekidis, I.; Schütte, C.; Noé, F. Data-Driven Model Reduction and Transfer Operator Approximation. J. Nonlinear Sci. 2018, 28, 985–1010. [Google Scholar] [CrossRef]
  23. Wikle, C.K.; Zammit-Mangion, A. Statistical Deep Learning for Spatial and Spatiotemporal Data. Annu. Rev. Stat. Its Appl. 2023, 10, 247–270. [Google Scholar] [CrossRef]
  24. Choi, J. Interpreting and Explaining Deep Neural Networks: A Perspective on Time Series Data. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), Virtual Event, CA, USA, 6–10 July 2020; pp. 3563–3564. [Google Scholar]
  25. Sarda, K.; Yerudkar, A.; Del Vecchio, C. Missing Data Imputation for Real Time-series Data in a Steel Industry using Generative Adversarial Networks. In Proceedings of the IECON 2021—47th Annual Conference of the IEEE Industrial Electronics Society, Toronto, ON, Canada, 13–16 October 2021. [Google Scholar] [CrossRef]
  26. Guo, Z.J.; Wan, Y.M.; Ye, H. A data imputation method for multivariate time series based on generative adversarial network. Neurocomputing 2019, 360, 185–197. [Google Scholar] [CrossRef]
  27. Vinyals, O.; Toshev, A.; Bengio, S.; Erhan, D. Show and tell: Lessons learned from the 2015 MSCOCO image captioning challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 652–663. [Google Scholar]
  28. Lines, J.; Bagnall, A. Time series classification with ensembles of elastic distance measures. Data Min. Knowl. Discov. 2015, 29, 565–592. [Google Scholar] [CrossRef]
  29. Zhang, D.; Zhao, J.; Xie, Y. Determining the length of sliding window by using frequency decomposition. In Proceedings of the 6th International Symposium on Autonomous Systems (ISAS), Nanjing, China, 23–25 June 2023; pp. 1–5. [Google Scholar]
  30. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E.H. Squeeze-and-excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef]
  31. Tennessee Eastman Challenge Archive. Available online: http://depts.washington.edu/control/LARRY/TE/download.html#Topics (accessed on 1 December 2023).
Figure 1. The integral structure of the proposed approach.
Figure 2. The structure of the sequence-to-sequence model.
Figure 3. The structure of the TE model.
Figure 4. Reactor pressure output sequence.
Figure 5. Reactor temperature output sequence.
Figure 6. Reactor level output sequence.
Figure 7. Three sequences after data alignment.
Table 1. The results with and without attention mechanisms.

| No. | Attention Vector | Singular Values (Before Attention) | Singular Values (After Attention) |
|-----|------------------|------------------------------------|-----------------------------------|
| 1 | (0.8209, 0.0973, 0.0817) | (10.3430, 0.9434, 0.6881) | (8.4915, 0.0918, 0.0562) |
| 2 | (0.8221, 0.0943, 0.0836) | (10.3361, 0.9399, 0.6836) | (8.4974, 0.0886, 0.0571) |
| 3 | (0.8206, 0.0949, 0.0847) | (10.3366, 0.9482, 0.6789) | (8.4819, 0.0899, 0.0574) |
| 4 | (0.8189, 0.0979, 0.0833) | (10.3355, 0.9405, 0.6848) | (8.4638, 0.0919, 0.0570) |
| 5 | (0.8237, 0.0952, 0.0811) | (10.3429, 0.9540, 0.6959) | (8.5196, 0.0908, 0.0564) |
| 6 | (0.8139, 0.1013, 0.0848) | (10.4274, 0.9390, 0.7164) | (8.4872, 0.0951, 0.0607) |
| 7 | (0.8204, 0.0977, 0.0817) | (10.3377, 0.9440, 0.6896) | (8.4814, 0.0923, 0.0563) |
| 8 | (0.8223, 0.0965, 0.0812) | (10.3414, 0.9412, 0.6848) | (8.5038, 0.0908, 0.0556) |
| 9 | (0.8148, 0.0991, 0.0861) | (10.4311, 0.9453, 0.7120) | (8.4995, 0.0937, 0.0613) |
| 10 | (0.8192, 0.0967, 0.0841) | (10.3794, 0.9326, 0.7177) | (8.5128, 0.0893, 0.0603) |
Table 2. Comparison of different methods.

| Methods | Singular Values of Data Matrix | Robustness | Accuracy | Matrix Size | Window Length Selection |
|---|---|---|---|---|---|
| The expansion method | (10.0246, 2.5754, 1.5550) | Low | High | Large | Experience |
| | (9.9982, 2.5985, 1.4329) | | | | |
| | (10.6550, 1.3009, 0.9945) | | | | |
| | (10.0237, 2.6001, 1.4518) | | | | |
| | (10.7375, 1.3601, 1.0190) | | | | |
| | (9.8053, 2.5810, 1.5083) | | | | |
| | (10.4980, 1.4309, 0.9373) | | | | |
| | (9.9861, 2.0944, 1.3275) | | | | |
| | (10.4412, 1.5361, 1.2409) | | | | |
| | (10.4842, 1.5255, 1.2490) | | | | |
| The compression method | (6.7779, 1.0470, 0.7972) | High | Low | Small | Experience |
| | (6.7886, 1.0173, 0.7994) | | | | |
| | (6.7175, 1.0483, 0.8169) | | | | |
| | (6.7740, 1.2642, 0.7789) | | | | |
| | (6.7746, 1.0476, 0.8315) | | | | |
| | (6.7725, 1.0259, 0.8037) | | | | |
| | (6.7404, 1.2327, 0.9102) | | | | |
| | (6.7647, 0.9975, 0.8060) | | | | |
| | (6.7653, 1.2190, 0.9224) | | | | |
| | (6.7800, 1.0256, 0.8012) | | | | |
| The proposed method | See Table 1 | High | High | Related to accuracy | Frequencies of reconstructed sequences |