Article

RAdam-DA-NLSTM: A Nested LSTM-Based Time Series Prediction Method for Human–Computer Intelligent Systems

Banteng Liu, Wei Chen, Zhangquan Wang, Seyedamin Pouriyeh and Meng Han
1 College of Information Science and Technology, Zhejiang Shuren University, Hangzhou 310015, China
2 Binjiang Institute of Zhejiang University, Hangzhou 310053, China
3 Department of Information Technology, Kennesaw State University, Atlanta, GA 30144, USA
4 College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(14), 3084; https://doi.org/10.3390/electronics12143084
Submission received: 25 June 2023 / Revised: 11 July 2023 / Accepted: 12 July 2023 / Published: 16 July 2023

Abstract

At present, time series prediction methods are widely applied in Human–Computer Intelligent Systems across fields such as Finance, Meteorology, and Medicine. To enhance the accuracy and stability of the prediction model, this paper proposes a time series prediction method called RAdam-Dual-stage Attention mechanism-Nested Long Short-Term Memory (RAdam-DA-NLSTM). First, we design a Nested LSTM (NLSTM), which adopts a new internal LSTM unit structure as the memory cell of the LSTM to guide memory forgetting and memory selection. Then, we design an autoencoder network based on the Dual-stage Attention mechanism (DA-NLSTM), which uses an NLSTM encoder based on an input attention mechanism and an NLSTM decoder based on a temporal attention mechanism. Additionally, we adopt the RAdam optimizer to solve the objective function; it dynamically selects between the Adam and SGD optimizers according to the variance dispersion and constructs a rectifier term so that the adaptive momentum is fully expressed. Finally, we analyze and test this method on multiple datasets, including PM2.5, stock, traffic, and biological signal data, and the experimental results show that RAdam-DA-NLSTM achieves higher prediction accuracy and stability than other traditional methods.

1. Introduction

As a crucial component of the data-driven economy, Human–Computer Intelligent Systems generate vast amounts of data that play a critical role in the current social and economic landscape [1,2]. Most of these data take the form of time series; therefore, time series prediction is a research hotspot in Human–Computer Intelligent Systems.
At present, time series prediction methods led by RNNs (Recurrent Neural Networks) are applied extensively in the fields of Finance [3,4], Meteorology [5,6], Medicine [7], etc. RNNs gain a degree of memory by introducing local or global feedback connections into the forward structure. However, the vanishing and exploding gradients of an RNN keep its prediction accuracy low in practical applications [8]. In contrast, LSTM (Long Short-Term Memory) [9] replaces hidden layer neurons with memory units, which handle time-series-related problems more effectively. Given enough training samples, LSTM can fully mine the information contained in massive data and exhibits deep learning ability [10,11]. Therefore, some scholars began to study LSTM to overcome the problems of RNNs. Karevan et al. [12] established a data-driven LSTM forecast model to predict the weather. Xie et al. [13] used LSTM to predict blood glucose levels in patients with type 1 diabetes. Pathan et al. [14] used an LSTM-based prediction model to predict the future mutation rate of the COVID-19 virus.
In recent years, scholars have explored further possibilities of LSTM to make it more effective at processing high-dimensional, complex big data, studying its optimization along three lines in particular: feature aggregation, network structure improvement, and objective function optimization.
In feature aggregation, some scholars study autoencoder networks, which became widely popular through their successful application to machine translation [15,16,17]. The core idea is to encode the input sequence into a fixed-length vector and then decode that vector into the output sequence. However, the performance of the autoencoder network deteriorates rapidly as the input sequence length increases [18,19]. Therefore, inspired by human attention theory, some scholars studied attention-based autoencoder networks, which adaptively select a subset of inputs (or features) to improve the analysis of long sequences. Rao et al. [18] proposed an attention-based momentum LSTM autoencoder network for behavior recognition, which outperformed traditional methods in simulation. Baddar et al. [19] proposed an LSTM autoencoder network with attentive mode variation for face recognition, which improved recognition accuracy. Pandey et al. [20] adopted LSTM as an encoder for machine translation, improving its effectiveness. However, attention-based autoencoder networks have only been proven effective in machine translation, face recognition, and image processing, and there are few studies on time series prediction [23,24,25].
In network structure improvement, Cho et al. [26] proposed the Gated Recurrent Unit (GRU), which simplifies the LSTM structure while preserving the original classification results. Inspired by LSTM and GRU, Sun et al. [27] proposed the Gated Memory Unit (GMU) and evaluated it in terms of parameter volume, convergence, and accuracy; the results showed that GMU is a promising choice for handwriting recognition tasks. Lei et al. [28] proposed the Simple Recurrent Unit (SRU), which improved LSTM training speed for recognition tasks. These models, however, sacrifice representational performance to improve training speed and simplify the network structure, whereas nonlinear mapping ability is particularly important in time series prediction. Scholars often adopt stacked network structures [29,30] to strengthen this ability, but the training speed then suffers. How to improve the LSTM network structure so that it gains nonlinear mapping ability while retaining training speed is therefore a key research focus.
In objective function optimization, common optimizers include the Stochastic Gradient Descent (SGD) optimizer, Adaptive Gradient (Adagrad) optimizer, Root Mean Square Prop (RMSProp) optimizer, and Adaptive Moment Estimation (Adam) optimizer [31,32,33]. SGD converges to excellent solutions, but its convergence speed is not ideal. Adagrad relies on a global learning rate and tends to become stuck at local extrema when training runs long. RMSProp adaptively adjusts the gradient magnitude in each direction but is prone to gradient explosion. Adam combines the advantages of Adagrad and RMSProp, computing the update step from first- and second-moment estimates of the gradient, and converges faster. However, Adam's adaptive learning rate can exhibit large variance fluctuations and easily falls into local optima. Therefore, Liu et al. [34] proposed a modified Adam optimizer, the Rectified Adam optimizer (RAdam), which dynamically selects between Adam and SGD according to the variance dispersion and constructs a rectifier term. The adaptive momentum, as a function of the underlying variance, is allowed to express itself slowly but steadily, which enhances the stability of model training. RAdam thus combines the advantages of Adam and SGD: it ensures fast convergence while making it difficult to fall into local optima.
In this paper, building on the related work above, we propose a novel time series prediction method called RAdam-Dual Stage Attention Mechanism-Nested Long Short-Term Memory (RAdam-DA-NLSTM). Our key contributions are as follows:
  • We introduce Nested LSTM (NLSTM), an internal LSTM unit structure designed to guide memory forgetting and memory selection. By incorporating NLSTM as the memory cell of LSTM, we enhance prediction accuracy.
  • We develop an autoencoder network based on the Dual Stage Attention Mechanism (DA-NLSTM). This network utilizes an NLSTM encoder with an input attention mechanism and an NLSTM decoder with a time attention mechanism. This design addresses the attention dispersion issue present in traditional LSTM architectures. It effectively captures long-term time dependencies in time series data and enhances feature aggregation within the network.
  • We employ the RAdam optimizer to optimize the objective function. RAdam dynamically selects between the Adam and SGD optimizers based on variance dispersion. Additionally, we introduce a rectifier term to bolster the model’s stability.
In summary, the proposed RAdam-DA-NLSTM method is an innovative approach to time series prediction that advances model enhancement, feature aggregation optimization, and objective function optimization. These contributions advance the field of time series forecasting and pave the way for further research and practical applications.

2. RAdam-DA-NLSTM

RAdam-DA-NLSTM adopts an autoencoder network structure with four layers: the input layer, encoder, decoder, and output layer. It uses two Nested LSTMs as the encoder and decoder, respectively, optimizing the NLSTM1 encoder with an input attention mechanism and the NLSTM2 decoder with a temporal attention mechanism. During encoding and decoding, the RAdam optimizer updates the DA-NLSTM network objective function. Figure 1 is the block diagram of the model construction.
The input to the input layer is the matrix

$$X = \begin{pmatrix} x_1^1 & \cdots & x_t^1 & \cdots & x_T^1 \\ \vdots & & \vdots & & \vdots \\ x_1^n & \cdots & x_t^n & \cdots & x_T^n \\ \vdots & & \vdots & & \vdots \\ x_1^N & \cdots & x_t^N & \cdots & x_T^N \end{pmatrix}$$

and the target sequence is $Y = (y_1, \dots, y_{t-1}, \dots, y_{T-1})$. By learning the RAdam-DA-NLSTM time series prediction model, we obtain a mapping function $F$ that predicts the unknown value $\hat{y}_T$:

$$\hat{Y} = F(y_1, \dots, y_t, \dots, y_{T-1}, X)$$

where $x_t^n$ denotes the value of the $n$-th sequence at the $t$-th time step.
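To make this data layout concrete, the following is a minimal NumPy sketch of how such supervised samples can be sliced from raw series; the function name and array shapes are our own illustration, not code from the paper.

```python
import numpy as np

def make_windows(series: np.ndarray, target: np.ndarray, T: int):
    """Slice N driving series and a target series into supervised samples.

    series: (time, N) array holding the N exogenous sequences.
    target: (time,) array holding the series to predict.
    Each sample pairs a window X = (x_1, ..., x_T) of all N series with
    the known targets (y_1, ..., y_{T-1}) and the label y_T.
    """
    X, y_hist, y_next = [], [], []
    for t in range(len(series) - T + 1):
        X.append(series[t:t + T])            # (T, N) window of driving series
        y_hist.append(target[t:t + T - 1])   # known targets y_1 .. y_{T-1}
        y_next.append(target[t + T - 1])     # value y_T to predict
    return np.stack(X), np.stack(y_hist), np.array(y_next)
```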

2.1. Nested LSTM

Nested LSTM replaces the memory cell of the traditional LSTM with a new internal LSTM structure, and access to the internal memory is gated in the same way. The Nested LSTM can therefore access its internal memory more selectively [11], giving the Nested LSTM prediction model stronger processing ability and higher prediction accuracy. Figure 2 shows the Nested LSTM unit model structure.
Nested LSTM is divided into internal LSTM and external LSTM. Their gating systems are consistent with the traditional LSTM. Nested LSTM has four gating systems, namely forget gate, input gate, candidate memory cell, and output gate. The calculation formulas for each gate are as follows:
Forget gate:
$$f_t = \sigma(W_{fx} x_t + W_{fh} h_{t-1} + b_f)$$
Input gate:
$$i_t = \sigma(W_{ix} x_t + W_{ih} h_{t-1} + b_i)$$
Candidate memory cell:
$$\tilde{c}_t = \tanh(W_{cx} x_t + W_{ch} h_{t-1} + b_c)$$
Memory cell: the input and hidden state of the internal LSTM are
$$\bar{h}_{t-1} = f_t \cdot c_{t-1}$$
$$\bar{x}_t = i_t \cdot \tilde{c}_t$$
and the internal LSTM is updated as
$$\bar{f}_t = \sigma(\bar{W}_{fx} \bar{x}_t + \bar{W}_{fh} \bar{h}_{t-1} + \bar{b}_f)$$
$$\bar{i}_t = \sigma(\bar{W}_{ix} \bar{x}_t + \bar{W}_{ih} \bar{h}_{t-1} + \bar{b}_i)$$
$$\tilde{\bar{c}}_t = \tanh(\bar{W}_{cx} \bar{x}_t + \bar{W}_{ch} \bar{h}_{t-1} + \bar{b}_c)$$
$$\bar{o}_t = \sigma(\bar{W}_{ox} \bar{x}_t + \bar{W}_{oh} \bar{h}_{t-1} + \bar{b}_o)$$
$$\bar{c}_t = \bar{f}_t \cdot \bar{c}_{t-1} + \bar{i}_t \cdot \tilde{\bar{c}}_t$$
$$\bar{h}_t = \bar{o}_t \cdot \tanh(\bar{c}_t)$$
The external LSTM memory cell is updated as:
$$c_t = \bar{h}_t$$
Output gate:
$$o_t = \sigma(W_{ox} x_t + W_{oh} h_{t-1} + b_o)$$
New hidden state:
$$h_t = o_t \cdot \tanh(c_t)$$
where $\sigma$ denotes the sigmoid function. In the external LSTM, $W_{fx}$ and $W_{fh}$ denote the weight matrices of the forget gate; $W_{ix}$ and $W_{ih}$ those of the input gate; $W_{cx}$ and $W_{ch}$ those of the candidate memory cell; $W_{ox}$ and $W_{oh}$ those of the output gate; and $b_f$, $b_i$, $b_c$, and $b_o$ denote the biases of the forget gate, input gate, candidate memory cell, and output gate, respectively. In the internal LSTM, $\bar{x}_t$, $\bar{h}_{t-1}$, and $\bar{c}_{t-1}$ denote the current input and the hidden state and memory cell of the previous round, respectively; the barred weight matrices $\bar{W}_{fx}$, $\bar{W}_{fh}$, $\bar{W}_{ix}$, $\bar{W}_{ih}$, $\bar{W}_{cx}$, $\bar{W}_{ch}$, $\bar{W}_{ox}$, $\bar{W}_{oh}$ and biases $\bar{b}_f$, $\bar{b}_i$, $\bar{b}_c$, $\bar{b}_o$ play the corresponding roles for the internal gates.
The output of the output layer is:
$$y_t = \sigma(W_{yh} h_t)$$
where $W_{yh}$ denotes the weight matrix of the output layer.
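The gating flow above can be summarized in a short NumPy sketch of one Nested LSTM step. This is a minimal illustration under the formulas above; the dictionary-of-weights layout and function names are our own assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_gates(p, x, h):
    """Activations of the four LSTM gates for a parameter dict p."""
    f = sigmoid(p["Wfx"] @ x + p["Wfh"] @ h + p["bf"])
    i = sigmoid(p["Wix"] @ x + p["Wih"] @ h + p["bi"])
    c_tilde = np.tanh(p["Wcx"] @ x + p["Wch"] @ h + p["bc"])
    o = sigmoid(p["Wox"] @ x + p["Woh"] @ h + p["bo"])
    return f, i, c_tilde, o

def nlstm_step(x_t, h_prev, c_prev, c_bar_prev, outer, inner):
    """One Nested LSTM step; outer/inner hold the W_{*x}, W_{*h}, b_* weights."""
    f, i, c_tilde, o = lstm_gates(outer, x_t, h_prev)

    # The outer memory update becomes the input of the internal LSTM
    h_bar_prev = f * c_prev          # \bar{h}_{t-1} = f_t · c_{t-1}
    x_bar = i * c_tilde              # \bar{x}_t = i_t · \tilde{c}_t

    # A standard LSTM step runs inside the memory cell
    f_b, i_b, c_b_tilde, o_b = lstm_gates(inner, x_bar, h_bar_prev)
    c_bar = f_b * c_bar_prev + i_b * c_b_tilde
    h_bar = o_b * np.tanh(c_bar)

    # The internal hidden state becomes the external memory cell
    c_t = h_bar
    h_t = o * np.tanh(c_t)
    return h_t, c_t, c_bar
```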

2.2. DA-NLSTM

DA-NLSTM includes the NLSTM encoder based on the input attention mechanism and the NLSTM decoder based on the time attention mechanism.

2.2.1. The NLSTM Encoder Based on Input Attention Mechanism

The NLSTM encoder based on the input attention mechanism is composed of the input attention mechanism and NLSTM1. Figure 3 shows its structure.
RAdam-DA-NLSTM applies input attention in the NLSTM encoder to preprocess $X$. The query, key, and value of the input attention are as follows: the query is the concatenation of the last hidden state $h_{t-1}$ and cell state $s_{t-1}$ of NLSTM1; the key is the whole sequence information; and the value is the same as the key. We compute the attention score $e_t^n$ from the query and key and normalize it with softmax to obtain the weight $\alpha_t^n$ of each sequence:
$$e_t^n = V_e^T \tanh(W_e [h_{t-1}; s_{t-1}] + U_e x^n)$$
$$\alpha_t^n = \frac{\exp(e_t^n)}{\sum_{i=1}^{N} \exp(e_t^i)}$$
where $V_e$, $W_e$, and $U_e$ denote trainable parameters; $[h_{t-1}; s_{t-1}]$ denotes the query of the input attention; $x^n$ denotes the $n$-th input sequence, i.e., the key of the input attention; and $\tanh$ denotes the hyperbolic tangent function. From the sequence weights and sequence information, we obtain the preprocessed input:
$$\tilde{X}_t = \sum_{n=1}^{N} \alpha_t^n x_t^n$$
We then feed $\tilde{X}_t$ into NLSTM1 and obtain the hidden state of the encoding layer at each time point $t$:
$$h_t = f_1(h_{t-1}, \tilde{X}_t)$$
where $f_1$ denotes the computation of the NLSTM1 unit.
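As a sketch of this encoder-side attention, the snippet below computes the weights $\alpha_t^n$ and the weighted input from a (T, N) window, following the two formulas above; the parameter names and shapes (Ve, We, Ue) are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                  # subtract max for numerical stability
    return np.exp(z) / np.exp(z).sum()

def input_attention(h_prev, s_prev, X, Ve, We, Ue):
    """Input attention over the N driving series (a sketch).

    X: (T, N) window; column X[:, n] is the n-th series x^n (key = value).
    h_prev, s_prev: last hidden/cell state of NLSTM1 (the query).
    Ve: (p,), We: (p, 2m), Ue: (p, T) are trainable parameters.
    """
    query = np.concatenate([h_prev, s_prev])   # [h_{t-1}; s_{t-1}]
    N = X.shape[1]
    e = np.array([Ve @ np.tanh(We @ query + Ue @ X[:, n]) for n in range(N)])
    alpha = softmax(e)                         # weights alpha_t^n
    return alpha, X @ alpha                    # sum_n alpha_t^n x_t^n at every t
```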

2.2.2. The NLSTM Decoder Based on Time Attention Mechanism

The NLSTM decoder based on the time attention mechanism is composed of the time attention mechanism and NLSTM2. Figure 4 shows its structure.
RAdam-DA-NLSTM applies temporal attention in the decoder to preprocess $h_t$. The query, key, and value of the temporal attention are as follows: the query is the concatenation of the last hidden state $d_{t-1}$ and cell state $s_{t-1}$ of NLSTM2; the key is the hidden state $h_t$ of NLSTM1 at each time point; and the value is the same as the key. We compute the attention score $l_t^m$ from the query and key and normalize it with softmax to obtain the weight $\beta_t^m$ of the hidden state at each time point.
$$l_t^m = V_d^T \tanh(W_d [d_{t-1}; s_{t-1}] + U_d h_m), \quad 1 \le m \le T$$
$$\beta_t^m = \frac{\exp(l_t^m)}{\sum_{j=1}^{T} \exp(l_t^j)}$$
where $V_d$, $W_d$, and $U_d$ denote trainable parameters; $[d_{t-1}; s_{t-1}]$ denotes the query of the temporal attention; and $h_m$ denotes the hidden state of NLSTM1, i.e., the key of the temporal attention. From the weights $\beta_t^m$ and the hidden states of all time points, we obtain the updated hidden layer state:
$$c_t = \sum_{m=1}^{T} \beta_t^m h_m$$
Then, we obtain $\tilde{Y} = (\tilde{y}_1, \dots, \tilde{y}_{t-1}, \dots, \tilde{y}_{T-1})$ by combining the updated hidden layer state with the known target sequence $Y = (y_1, \dots, y_{t-1}, \dots, y_{T-1})$ through $[y_{t-1}; c_{t-1}]$:
$$\tilde{y}_{t-1} = w^T [y_{t-1}; c_{t-1}] + b$$
where $[y_{t-1}; c_{t-1}]$ denotes the combination of the decoder input and the updated hidden layer state, and $w$ and $b$ are the parameters mapping the combination to the decoder input size. We then feed $\tilde{y}_{t-1}$ into NLSTM2 to obtain the hidden state $d_t$ at each time point $t$:
$$d_t = f_2(d_{t-1}, \tilde{y}_{t-1})$$
where $f_2$ denotes the computation of the NLSTM2 unit. The final output $\hat{Y}$ is:
$$\hat{Y} = F(y_1, \dots, y_t, \dots, y_{T-1}, X) = V_y^T (W_y [d_T; c_T] + b_w) + b_v$$
where $[d_T; c_T]$ denotes the combination of the final NLSTM2 hidden state and the updated hidden layer state, $W_y$ and $b_w$ map the combination to the decoder size, and $V_y$ and $b_v$ denote the weight and bias of the final linear function.
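The decoder side can be sketched analogously: the snippet below scores each encoder hidden state, forms the context $c_t$, and builds the decoder input $\tilde{y}_{t-1}$. As before, the names and shapes are our own illustration under the formulas above.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def temporal_attention(d_prev, s_prev, H, Vd, Wd, Ud):
    """Temporal attention over encoder hidden states h_1..h_T (a sketch).

    H: (T, m) matrix of NLSTM1 hidden states (keys = values).
    d_prev, s_prev: previous hidden/cell state of NLSTM2 (the query).
    """
    query = np.concatenate([d_prev, s_prev])   # [d_{t-1}; s_{t-1}]
    l = np.array([Vd @ np.tanh(Wd @ query + Ud @ h_m) for h_m in H])
    beta = softmax(l)                          # weights beta_t^m
    return beta, beta @ H                      # context c_t = sum_m beta_t^m h_m

def decoder_input(y_prev, c_prev, w, b):
    """Combine the known target y_{t-1} with the context c_{t-1} into the
    decoder input: y~_{t-1} = w^T [y_{t-1}; c_{t-1}] + b."""
    return w @ np.concatenate([[y_prev], c_prev]) + b
```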

2.3. RAdam Optimizer

We use the RAdam optimizer to optimize the objective function: the SGD-with-momentum optimizer is used at the initial stage of training, and the method then switches to the improved Adam optimizer at a point determined by the potential divergence of the variance. RAdam also builds a rectifier term that allows the adaptive momentum, as a function of the underlying variance, to be expressed slowly but stably, which improves the stability of model training. RAdam therefore has the advantages of both Adam and SGD: it ensures fast convergence while making it difficult to fall into local optima at the beginning of training.
In this paper, we use the square loss as the objective function, and the formula is as follows:
$$J(Y, \hat{Y}) = \frac{1}{N} \sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2$$
where N denotes the number of training samples. Y i denotes the target sequence value of the training sample and Y ^ i denotes the predicted sequence value of the training sample. Figure 5 is the flow chart of the RAdam optimizer solving the objective function, and the steps are as follows.
Step 1.
Input the step size $\alpha_t$ and decay rates $\beta_1$, $\beta_2$; set $t = 0$.
Step 2.
Initialize the moving 1st moment $m_0$ and moving 2nd moment $v_0$, and compute the maximum length $\rho_\infty$ of the approximated simple moving average (SMA):
$$\rho_\infty = \frac{2}{1 - \beta_2} - 1$$
Step 3.
Set $t = t + 1$; compute the gradient $g_t$ of the objective function; update the moving 1st moment $m_t$ and moving 2nd moment $v_t$; bias-correct the moving 1st moment to obtain $\hat{m}_t$; and compute the length $\rho_t$ of the approximated SMA:
$$g_t = \nabla_\theta J_t(\theta_{t-1})$$
$$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$$
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$$
$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}$$
$$\rho_t = \rho_\infty - \frac{2 t \beta_2^t}{1 - \beta_2^t}$$
where $\nabla_\theta$ denotes the gradient operator, $J_t(\theta_{t-1})$ denotes the objective function, and $\theta_{t-1}$ denotes the model parameters at step $t-1$.
Step 4.
Calculate $\theta_t$ according to $\rho_t$. If $\rho_t > 4$, adopt the Adam branch: bias-correct the moving 2nd moment and build the rectifier term $r_t$, then obtain the corrected 2nd moment $\hat{v}_t$ and the model parameters $\theta_t$:
$$\hat{v}_t = \sqrt{\frac{v_t}{1 - \beta_2^t}}$$
$$r_t = \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\,\rho_\infty}{(\rho_\infty - 4)(\rho_\infty - 2)\,\rho_t}}$$
$$\theta_t = \theta_{t-1} - \frac{\alpha_t r_t \hat{m}_t}{\hat{v}_t}$$
If $\rho_t \le 4$, adopt the SGD-with-momentum branch and obtain the parameters $\theta_t$:
$$\theta_t = \theta_{t-1} - \alpha_t \hat{m}_t$$
Step 5.
Output the model parameters $\theta_t$.
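For reference, the five steps above condense into the following NumPy sketch of one RAdam parameter update; the hyperparameter defaults and the epsilon guard are conventional assumptions rather than values specified in the paper.

```python
import numpy as np

def radam_update(theta, grad, m, v, t,
                 alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam update (t is the 1-based step counter).

    Falls back to SGD with momentum while the SMA length rho_t <= 4.
    """
    rho_inf = 2.0 / (1.0 - beta2) - 1.0                  # max SMA length
    m = beta1 * m + (1.0 - beta1) * grad                 # moving 1st moment
    v = beta2 * v + (1.0 - beta2) * grad ** 2            # moving 2nd moment
    m_hat = m / (1.0 - beta1 ** t)                       # bias correction
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

    if rho_t > 4.0:                                      # variance tractable: Adam branch
        v_hat = np.sqrt(v / (1.0 - beta2 ** t))
        r_t = np.sqrt(((rho_t - 4.0) * (rho_t - 2.0) * rho_inf)
                      / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))
        theta = theta - alpha * r_t * m_hat / (v_hat + eps)
    else:                                                # early steps: SGD + momentum
        theta = theta - alpha * m_hat
    return theta, m, v
```

Note how the rectifier $r_t$ starts near zero and grows toward one, so the adaptive step is expressed gradually as the variance estimate stabilizes.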

3. Experiment and Simulation

3.1. Data Sources

To prove the effectiveness of RAdam-DA-NLSTM, we apply the model to PM2.5 prediction, stock prediction, traffic prediction, and biological signal prediction, respectively.
PM2.5 prediction: We use the Beijing PM2.5 dataset for prediction. This dataset is a series of 43,824 time steps collected by the U.S. Embassy in Beijing from 1 January 2010, to 31 December 2014, including the current time, PM2.5 concentration, dew point, temperature, pressure, wind direction, wind speed, hours, rainfall hours, etc. at Beijing Capital International Airport.
Stock prediction: We use the Nasdaq 100 stock dataset (Nasdaq 100) for prediction. The dataset covers 40,560 time steps from 26 July 2016 to 22 December 2016, including the stock price data of 81 major companies in the Nasdaq 100 index.
Traffic prediction: We use the California traffic volume dataset of 24 road sections from the California transportation Performance Measurement System (PeMS) for traffic volume prediction, and the Seattle traffic speed dataset for traffic speed prediction. The sampling interval of the California traffic flow data is 5 min, covering 61 days from 1 May 2014 00:00:00 to 30 June 2014 23:59:00. The Seattle speed data are a vehicle speed dataset collected in Seattle in 2015 by 323 detectors, also at a 5 min sampling interval.
Biological signal prediction: We use two types of biological signals: ECG and BCG. The ECG signal comes from the sudden cardiac death Holter ECG database available on PhysioNet, which includes ECG signals from patients who experienced actual cardiac arrest. The signal was collected with two leads at a sampling frequency of 250 Hz, and medical experts meticulously annotated the starting point of the sudden cardiac death beat. The BCG signal comes from a large, complex dataset provided by a medical device company, containing ballistocardiogram recordings of over 100 patients with abnormal cardiac vibrations, spanning from 2016 to 2020. This million-scale time series is a valuable resource for bio-signal-assisted predictive analysis. By analyzing these diverse and comprehensive datasets, we aim to derive meaningful insights and improve prediction accuracy in biological signal analysis.
The above datasets exhibit the high-dimensional, complex characteristics needed to test the effectiveness of RAdam-DA-NLSTM.

3.2. Parameter Setting

RAdam-DA-NLSTM requires four important parameters: the time step length L, the number of encoder hidden units m1, the number of decoder hidden units m2, and the batch size b. These parameters were obtained through iterative experiments. Table 1 shows the parameter settings for the above datasets.
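For reproducibility, the Table 1 settings can be expressed as a simple configuration mapping (a sketch; the dictionary layout is our own):

```python
# Per-dataset hyperparameters from Table 1: time steps L, encoder units m1,
# decoder units m2, batch size b.
PARAMS = {
    "Beijing PM2.5":             {"L": 10, "m1": 64,  "m2": 64,  "b": 64},
    "NASDAQ 100":                {"L": 15, "m1": 128, "m2": 128, "b": 64},
    "California traffic volume": {"L": 15, "m1": 64,  "m2": 64,  "b": 64},
    "Seattle traffic speed":     {"L": 20, "m1": 64,  "m2": 64,  "b": 64},
    "ECG signal":                {"L": 15, "m1": 128, "m2": 128, "b": 64},
    "BCG signal":                {"L": 20, "m1": 128, "m2": 128, "b": 64},
}
```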

3.3. Comparative Analysis

We use 70% of each dataset as the training set, 10% as the validation set, and 20% as the test set. To further demonstrate the advantages of RAdam-DA-NLSTM, we run 20 experiments on each dataset with the SVM, RNN, GRU, LSTM, A-LSTM (attention LSTM), and DA-LSTM prediction models, and evaluate prediction accuracy with four indexes: mean absolute error (MAE), mean absolute percentage error (MAPE), root mean square error (RMSE), and coefficient of determination (R²).
$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| Y_i - \hat{Y}_i \right|$$
$$MAPE = \frac{100\%}{N} \sum_{i=1}^{N} \left| \frac{Y_i - \hat{Y}_i}{Y_i} \right|$$
$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2}$$
$$R^2 = 1 - \frac{\sum_{i=1}^{N} |\hat{Y}_i - Y_i|}{\sum_{i=1}^{N} |\hat{Y}_i - \bar{Y}_i|}$$
where N denotes the number of samples, Y i denotes the target sequence value of samples, Y i ^ denotes the predicted sequence value of samples and Y i ¯ denotes the target sequence average value of samples.
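A minimal implementation of these four indexes is sketched below; note that, as an assumption on our part, R² is computed here in its conventional squared-error form, whereas the printed formula above uses absolute differences.

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray):
    """Compute MAE, MAPE, RMSE, and R^2 for one prediction run (a sketch)."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))   # y_true must be nonzero
    rmse = np.sqrt(np.mean(err ** 2))
    # Conventional R^2 (squared-error form), an assumption here
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mae, mape, rmse, r2
```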

3.3.1. PM2.5 Prediction

In recent years, air pollution has become extremely serious, and air quality has attracted growing attention. Air quality is judged by the concentration of pollutants in the air, which reflects the degree of pollution. PM2.5 refers to atmospheric particles with a diameter of at most 2.5 microns, also known as lung-penetrating particles. Although the PM2.5 content of the earth's atmosphere is relatively small, these particles carry many toxic and harmful substances, reside in the atmosphere for a long time, and travel long distances, greatly affecting air quality and directly or indirectly harming human health and plant growth. Real-time monitoring and prediction of PM2.5 concentration is therefore essential. The following are the prediction results of this model on the Beijing PM2.5 test dataset, and Figure 6 shows the prediction fitting diagram.
Figure 6 shows that the trend of the predicted value and the real value curve of the RAdam-DA-NLSTM on the Beijing PM2.5 test data set is roughly consistent, indicating that the fitting result is ideal. Further, we compare the evaluation results of the Beijing PM2.5 test data set predicted by the above seven models. The evaluation results take the average value of 20 experiments. Table 2 and Figure 7 show the results.
Table 2 and Figure 7 present the MAE, MAPE, RMSE, and R² prediction results for the Beijing PM2.5 test dataset, comparing RAdam-DA-NLSTM with the other models. RAdam-DA-NLSTM clearly outperforms the other models, with smaller MAE, MAPE, and RMSE values and a higher R² score. These results highlight the enhanced prediction accuracy of RAdam-DA-NLSTM for the Beijing PM2.5 dataset and its advantages over other models in PM2.5 prediction. Notably, the R²(TS) metric reflects prediction performance on the training set. The R²(TS) values show that LSTM achieves better training results; however, on the test set, LSTM performs significantly worse than A-LSTM, DA-LSTM, and RAdam-DA-NLSTM. This discrepancy indicates that LSTM tends to overfit in PM2.5 prediction.

3.3.2. Stock Prediction

Stock prediction in the financial field has always been a hot topic in time series prediction. Stock forecasting refers to the behavior of predicting the future development direction of the stock market or the rise and fall range of stocks according to the development of the stock market. Short-term stock prediction is of great significance for stock investors to analyze the market rhythm and manage the investment risk of holding shares. We use the Nasdaq 100 index stock test data set for prediction in this paper, and Figure 8 shows the fitting diagram for Nasdaq 100 index stock test data.
Figure 8 shows that the trend of the predicted value and real value curves of RAdam-DA-NLSTM on the Nasdaq 100 index stock test dataset is also roughly consistent, and the fitting result is quite good. In addition, we compare the evaluation results of the above seven models in predicting the Nasdaq 100 index stock test dataset. The evaluation results take the average value of 20 experiments. Table 3 and Figure 9 show the results.
Table 3 and Figure 9 provide compelling evidence that RAdam-DA-NLSTM achieves superior results on the Nasdaq 100 stock test dataset compared with other models. Specifically, RAdam-DA-NLSTM demonstrates smaller MAE, MAPE, and RMSE values, indicating its enhanced forecasting accuracy. Furthermore, the R² value of RAdam-DA-NLSTM outperforms other models, underscoring its robust analytical capability when applied to complex stock datasets. These findings highlight the effectiveness of RAdam-DA-NLSTM in predicting stock market behavior and its potential for generating valuable insights in financial analysis.

3.3.3. Traffic Prediction

The traffic information system provides fast traffic guidance for cities, and traffic volume prediction and traffic speed prediction are its key points. However, urban traffic has its own characteristics, and traffic flow and traffic speed data are hard to estimate; traffic information prediction is therefore highly significant but not easy. We adopt RAdam-DA-NLSTM to predict the California vehicle volume dataset and the Seattle vehicle speed dataset, and Figure 10 and Figure 11 show the fitting diagrams.
Figure 10 and Figure 11 show that the RAdam-DA-NLSTM has good fitting results for the California traffic volume data set and Seattle speed data set. Especially in the fitting of the California vehicle volume data set, it greatly highlights the advantages of the model. Then, we further compare the evaluation results of the traffic data set predicted by the above seven models. Table 4 and Table 5 and Figure 12 and Figure 13 show the results.
Table 4 and Table 5 and Figure 12 and Figure 13 illustrate that RAdam-DA-NLSTM yields lower prediction errors on the California volume dataset and the Seattle speed dataset than the other models, and its R² values are relatively large, indicating robust prediction capability on these datasets. These findings demonstrate the model's strong ability to predict traffic information, underlining its practical significance in traffic prediction.

3.3.4. Biological Signal Prediction

The ECG (Electrocardiogram) signal has strong nonlinearity, non-stationarity, and randomness, making it challenging to analyze precisely. For patients with potential sinus rhythm abnormalities, coronary heart disease, hypertension, and other diseases, predicting ECG signals in advance and identifying sudden cardiac death can save lives through timely intervention during sudden cardiac events. Figure 14 illustrates the fitting diagram of RAdam-DA-NLSTM for ECG signal prediction. The BCG (Ballistocardiogram) is a graphical representation of the cardiac vibration signal generated by the movement of the heart in response to blood ejection. It carries valuable information about cardiac function and condition and can provide early indications of potential cardiac abnormalities. Extracting heart-related information from BCG signals and detecting abnormal cardiac vibrations to diagnose heart disease can significantly assist remote patient monitoring. Figure 15 shows the fitting diagram of RAdam-DA-NLSTM for BCG signal prediction.
Figure 14 and Figure 15 show that the RAdam-DA-NLSTM has a strong nonlinear mapping ability for ECG signal prediction and BCG signal prediction. Then, we compare the evaluation results of the ECG signal data set predicted by the above seven models. Table 6 and Table 7 and Figure 16 and Figure 17 show the results.
Table 6 and Table 7 and Figure 16 and Figure 17 show that RAdam-DA-NLSTM exhibits strong predictive ability on biological signal datasets, with smaller MAE, MAPE, and RMSE and larger R² than the other models. This highlights the model's ability to predict biological signal data effectively. As in the previous experiments, RAdam-DA-NLSTM may not achieve the best results on the training set, but it delivers relatively good results on the test set. This again validates the model's superior capacity to address underfitting and overfitting, ensuring accurate predictions.

4. Discussion

Time series prediction methods are extensively utilized across various domains, including Finance, Healthcare, Environment, and Transportation. For instance, it has become possible to predict future trends and fluctuations in stock prices by analyzing historical stock market data. This is crucial for investors, traders, and fund managers as they can make informed investment decisions, manage portfolios, and develop effective trading strategies based on forecasted outcomes. Similarly, in the realm of atmospheric pollution, analyzing and modeling historical air quality data allows for predicting future PM2.5 levels. This information proves valuable for environmental departments, city planners, and public health organizations, as they can take appropriate measures to combat air pollution and safeguard public health.
Time series prediction also plays a vital role in transportation planning and management. By analyzing historical traffic data, including road traffic flow, congestion level, and public transportation demand, future traffic volume and congestion can be predicted. Consequently, urban transportation planners and traffic management organizations can utilize these predictions to formulate transportation plans, optimize traffic signal control, and provide effective traffic management solutions.
Furthermore, time series prediction holds significant importance in the medical field, particularly in analyzing biometric signals such as Electrocardiogram (ECG) and Ballistocardiogram (BCG). By modeling and analyzing these signals, it is possible to predict the progression of patients’ conditions, assess disease risks, and identify changes in physiological states. This empowers doctors, researchers, and healthcare providers to implement crucial medical interventions, devise personalized treatment strategies, and deliver improved healthcare based on projected outcomes, thereby enabling timely intervention and potentially saving lives.
In this paper, a novel time series prediction method called RAdam-DA-NLSTM is proposed, focusing on feature aggregation optimization, model enhancement, and objective function optimization. Experimental evaluations are conducted across four domains with six datasets, including stock data, PM2.5 data, traffic speed data, traffic volume data, ECG, and BCG. RAdam-DA-NLSTM exhibits superior fitting capabilities when compared to six comparative algorithms (SVM, RNN, GRU, LSTM, A-LSTM, and DA-LSTM) using four evaluation indicators (MAE, MAPE, RMSE, and R²). By addressing underfitting and overfitting issues prevalent in most models, this paper demonstrates the robust nonlinear mapping abilities of RAdam-DA-NLSTM in processing high-dimensional, complex, and nonlinear data.
However, it is important to acknowledge the limitations of the proposed time series prediction method in this paper. The experiments were limited to one-step predictions in order to facilitate better comparisons across different domains. While this approach is meaningful for forecasting PM2.5 levels and stock prices, it is essential to conduct multi-step prediction experiments for time series data with compact time frequencies and urgent demands, such as BCG signals. These experiments would further validate the long-term prediction capabilities of RAdam-DA-NLSTM. Additionally, it is crucial to note that demonstrating the generalization performance of RAdam-DA-NLSTM based solely on four domains may be an exaggeration. Future research and experiments must encompass a wider range of domains, taking into account the unique characteristics of each domain. This will contribute to the field of time series forecasting through more effective and specialized studies.

5. Conclusions

In the field of Human–Computer Intelligent Systems, our study proposes a powerful time series prediction model called RAdam-DA-NLSTM, which employs an autoencoder architecture. The model improves the memory ability of the system by using Nested LSTMs as encoder and decoder, and it includes both input and temporal attention mechanisms to enhance its feature cohesion. We integrate the RAdam optimizer to solve the objective function, yielding a more stable prediction system. In our simulation experiments, we used datasets including Beijing PM2.5, NASDAQ 100, California traffic volume, Seattle traffic speed, ECG signals, and BCG signals. The results demonstrate that RAdam-DA-NLSTM achieves higher prediction accuracy and stability.
For future work, our team will explore multi-step prediction of time series, integrate our prediction method in more applications, and contribute to the time series prediction of the Internet of Things (IoT) world. Our goal is to improve the efficiency and accuracy of predictions in the field of Human–Computer Intelligent Systems.

Author Contributions

Methodology, B.L.; Writing—original draft, W.C.; Writing—review & editing, M.H.; Supervision, Z.W., S.P. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Public Welfare Technology Application and Research Projects of Zhejiang Province of China under Grant No. LGF21F010004, the “Ling Yan” Research and Development Project of Science and Technology Department of the Zhejiang Province of China under Grant No. 2023C03189.

Data Availability Statement

Data are available on request due to privacy and ethical restrictions. The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Feng, G.; Li, Z.; Zhou, W. Research summary of big data analysis technology in network field. Comput. Sci. 2019, 46, 20. [Google Scholar]
  2. Yu, J.; Xu, Y.; Chen, H.; Ju, Z. Versatile graph neural networks toward intuitive human activity understanding. IEEE Trans. Neural Netw. 2022, 1–13. [Google Scholar] [CrossRef] [PubMed]
  3. Pahlawan, M.R.; Riksakomara, E.; Tyasnurita, R.; Muklason, A.; Vinarti, R.A. Stock price forecast of macro-economic factor using recurrent neural network. IAES Int. J. Artif. Intell. 2021, 10, 74–83. [Google Scholar] [CrossRef]
  4. Wankuan, B. Research and Application of RNN Neural Network in Stock Index Price Prediction Model. Ph.D. Thesis, Chongqing University, Chongqing, China, 2019. [Google Scholar]
  5. Dai, X.; Liu, J.; Li, Y. A recurrent neural network using historical data to predict time series indoor PM2.5 concentrations for residential buildings. Indoor Air 2021, 31, 1228–1237. [Google Scholar] [CrossRef] [PubMed]
  6. Huang, Y.; Zhao, H.; Huang, X. A Prediction Scheme for Daily Maximum and Minimum Temperature Forecasts Using Recurrent Neural Network and Rough set. IOP Conf. Ser. Earth Environ. Sci. 2019, 237, 022005. [Google Scholar] [CrossRef]
  7. Wunsch, A.; Pitak-Arnnop, P. Strategic planning for maxillofacial trauma and head and neck cancers during COVID-19 pandemic—December 2020 updated from Germany. Am. J. Otolaryngol. 2021, 42, 102932. [Google Scholar] [CrossRef] [PubMed]
  8. Bengio, Y. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 2002, 5, 157–166. [Google Scholar] [CrossRef]
  9. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  10. Yu, J.; Gao, H.; Zhou, D.; Liu, J.; Gao, Q.; Ju, Z. Deep temporal model-based identity-aware hand detection for space human–robot interaction. IEEE Trans. Cyber. 2022, 15, 13738–13751. [Google Scholar] [CrossRef]
  11. Wang, Y.; Xie, D.; Wang, X.; Li, G.; Zhu, M.; Zhang, Y. Wind turbine network interaction prediction based on pca-lstm model. Chin. J. Electr. Eng. 2019, 39, 11. [Google Scholar]
  12. Karevan, Z.; Suykens, J. Transductive LSTM for time-series prediction: An application to weather forecasting. Neural Netw. 2020, 125, 1–9. [Google Scholar] [CrossRef] [PubMed]
  13. Xie, J.; Wang, Q. Benchmarking Machine Learning Algorithms on Blood Glucose Prediction for Type I Diabetes in Comparison with Classical Time-Series Models. IEEE Trans. Biomed. Eng. 2020, 67, 3101–3124. [Google Scholar] [CrossRef] [PubMed]
  14. Pathan, R.K.; Biswas, M.; Khandaker, M.U. Time Series Prediction of COVID-19 by Mutation Rate Analysis using Recurrent Neural Network-based LSTM Model. Chaos Solitons Fractals 2020, 138, 110018. [Google Scholar] [CrossRef]
  15. Cho, K.; Merrienboer, B.V.; Bahdanau, D.; Bengio, Y. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Comput. Sci. 2014; in press. [Google Scholar]
  16. Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  17. Feng, J.; Li, Y.; Zhao, K.; Xu, Z.; Jin, D. DeepMM: Deep Learning Based Map Matching with Data Augmentation. IEEE Trans. Mob. Comput. 2020, 21, 2372–2384. [Google Scholar] [CrossRef]
  18. Rao, H.; Xu, S.; Hu, X.; Cheng, J.; Hu, B. Augmented Skeleton Based Contrastive Action Learning with Momentum LSTM for Unsupervised Action Recognition—ScienceDirect. Inf. Sci. 2021, 569, 90–109. [Google Scholar] [CrossRef]
  19. Baddar, W.J.; Ro, Y.M. Mode Variational LSTM Robust to Unseen Modes of Variation: Application to Facial Expression Recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 3215–3223. [Google Scholar]
  20. Pandey, D.; Chowdary, R. Modeling coherence by ordering paragraphs using pointer networks. Neural Netw. 2020, 126, 36–41. [Google Scholar] [CrossRef]
  21. Teng, X.; Zhang, Y.; Zhou, D.; He, M.; Han, M.; Liu, X. A two-stage deep learning model based on feature combination effects. Neurocomputing 2022, 512, 307–322. [Google Scholar] [CrossRef]
  22. Tang, Y.; Yu, F.; Pedrycz, W.; Yang, X.; Liu, S. Building trend fuzzy granulation based LSTM recurrent neural network for long-term time series forecasting. IEEE Trans. Fuzzy Syst. 2021, 30, 1599–1613. [Google Scholar] [CrossRef]
  23. Gan, Y.; Mao, Y.; Zhang, X.; Ji, S.; Pu, Y.; Han, M.; Yin, J.; Wang, T. Is your explanation stable? A Robustness Evaluation Framework for Feature Attribution. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security(CCS 2022), Los Angeles, CA, USA, 7–11 November 2022. [Google Scholar]
  24. Qin, Y.; Song, D.; Chen, H.; Cheng, W.; Cottrell, G.W. A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017. [Google Scholar]
  25. Li, Y.; Zhu, Z.; Kong, D.; Han, H.; Zhao, Y. EA-LSTM: Evolutionary attention-based LSTM for time series prediction. Knowl.-Based Syst. 2019, 181, 104785.1–104785.8. [Google Scholar] [CrossRef] [Green Version]
  26. Cho, K.; van Merrienboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. Comput. Sci. 2014; in press. [Google Scholar]
  27. Sun, L.; Su, T.; Zhou, S.; Yu, L. GMU: A Novel RNN Neuron and Its Application to Handwriting Recognition. In Proceedings of the 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), Kyoto, Japan, 9–15 November 2017. [Google Scholar]
  28. Tao, L.; Yu, Z. Training RNNs as Fast as CNNs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark, 7–11 September 2017. [Google Scholar]
  29. Chen, P.; Fu, X. A Graph Convolutional Stacked Bidirectional Unidirectional-LSTM Neural Network for Metro Ridership Prediction. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6950–6962. [Google Scholar] [CrossRef]
  30. Dikshit, A.; Pradhan, B.; Alamri, A.M. Long lead time drought forecasting using lagged climate variables and a stacked long short-term memory model. Sci. Total Environ. 2021, 755, 142638. [Google Scholar] [CrossRef]
  31. Yuan, W.; Hu, F.; Lu, L. A new non-adaptive optimization method: Stochastic gradient descent with momentum and difference. Appl. Intell. 2021, 52, 3939–3953. [Google Scholar] [CrossRef]
  32. Bera, S.; Shrivastava, V.K. Analysis of various optimizers on deep convolutional neural network model in the application of hyperspectral remote sensing image classification. Int. J. Remote Sens. 2020, 41, 2664–2683. [Google Scholar] [CrossRef]
  33. Haoyue, Y.; Tao, S.; Yan, Z.; Yingli, L.; Zhengtao, Y. Terahertz spectrum recognition based on bidirectional long-term and short-term memory network. Spectrosc. Spectr. Anal. 2019, 39, 6. [Google Scholar]
  34. Liu, L.; Jiang, H.; He, P.; Chen, W.; Han, J. On the Variance of the Adaptive Learning Rate and Beyond. In Proceedings of the International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
Figure 1. Model construction.
Figure 2. Nested LSTM unit model structure.
Figure 3. NLSTM Encoder.
Figure 4. NLSTM Decoder.
Figure 5. RAdam optimizer.
Figure 6. Prediction results (Beijing PM2.5).
Figure 7. Comparison results (Beijing PM2.5).
Figure 8. Prediction results (NASDAQ 100).
Figure 9. Comparison results (NASDAQ 100).
Figure 10. Prediction results (California traffic volume).
Figure 11. Prediction results (Seattle traffic speed).
Figure 12. Comparison results (California traffic volume).
Figure 13. Comparison results (Seattle traffic speed).
Figure 14. Prediction results (ECG signal).
Figure 15. Prediction results (BCG signal).
Figure 16. Comparison results (ECG signal).
Figure 17. Comparison results (BCG signal).
Table 1. Parameter setting results.

| Dataset | L | m1 | m2 | b |
|---|---|---|---|---|
| Beijing PM2.5 | 10 | 64 | 64 | 64 |
| NASDAQ 100 | 15 | 128 | 128 | 64 |
| California traffic volume | 15 | 64 | 64 | 64 |
| Seattle traffic speed | 20 | 64 | 64 | 64 |
| ECG signal | 15 | 128 | 128 | 64 |
| BCG signal | 20 | 128 | 128 | 64 |
Table 2. Evaluation results (Beijing PM2.5).

| Model | MAE | MAPE | RMSE | R² | R²(TS) |
|---|---|---|---|---|---|
| SVM | 1.1451 | 4.6632 | 1.2365 | 0.6532 | 0.7217 |
| RNN | 0.6621 | 3.6754 | 0.7321 | 0.7229 | 0.8323 |
| GRU | 0.6323 | 3.3632 | 0.6941 | 0.7892 | 0.8814 |
| LSTM | 0.6098 | 3.1946 | 0.6380 | 0.7921 | 0.8920 |
| A-LSTM | 0.2324 | 2.8312 | 0.5919 | 0.8126 | 0.8620 |
| DA-LSTM | 0.2032 | 2.7328 | 0.5521 | 0.8181 | 0.8705 |
| RAdam-DA-NLSTM | 0.1921 | 2.5217 | 0.5117 | 0.8213 | 0.8831 |
Table 3. Evaluation results (NASDAQ 100).

| Model | MAE | MAPE | RMSE | R² | R²(TS) |
|---|---|---|---|---|---|
| SVM | 0.3901 | 0.5013 | 0.5211 | 0.7172 | 0.8272 |
| RNN | 0.2163 | 0.2586 | 0.2681 | 0.7621 | 0.9031 |
| GRU | 0.2031 | 0.2512 | 0.2761 | 0.7663 | 0.9226 |
| LSTM | 0.1821 | 0.1931 | 0.2317 | 0.7874 | 0.9306 |
| A-LSTM | 0.0923 | 0.1034 | 0.1522 | 0.8021 | 0.9155 |
| DA-LSTM | 0.0478 | 0.0526 | 0.0632 | 0.8302 | 0.9207 |
| RAdam-DA-NLSTM | 0.0301 | 0.0516 | 0.0531 | 0.8825 | 0.9361 |
Table 4. Evaluation results (California traffic volume).

| Model | MAE | MAPE | RMSE | R² | R²(TS) |
|---|---|---|---|---|---|
| SVM | 0.2903 | 0.9233 | 0.3218 | 0.7621 | 0.8562 |
| RNN | 0.1642 | 0.5215 | 0.2811 | 0.8054 | 0.9172 |
| GRU | 0.1327 | 0.3291 | 0.2316 | 0.8298 | 0.9238 |
| LSTM | 0.1244 | 0.2282 | 0.1843 | 0.8536 | 0.9423 |
| A-LSTM | 0.0836 | 0.2132 | 0.1641 | 0.8721 | 0.9321 |
| DA-LSTM | 0.0624 | 0.1801 | 0.0912 | 0.8863 | 0.9626 |
| RAdam-DA-NLSTM | 0.0598 | 0.1721 | 0.0825 | 0.9136 | 0.9645 |
Table 5. Evaluation results (Seattle traffic speed).

| Model | MAE | MAPE | RMSE | R² | R²(TS) |
|---|---|---|---|---|---|
| SVM | 0.8931 | 4.6548 | 1.2031 | 0.6518 | 0.7931 |
| RNN | 0.4362 | 3.9325 | 0.9121 | 0.7029 | 0.8216 |
| GRU | 0.4210 | 3.8978 | 0.8945 | 0.7112 | 0.8344 |
| LSTM | 0.3834 | 3.6427 | 0.8649 | 0.7235 | 0.8367 |
| A-LSTM | 0.3756 | 3.5471 | 0.7921 | 0.7616 | 0.8721 |
| DA-LSTM | 0.2921 | 2.9221 | 0.4382 | 0.7915 | 0.8925 |
| RAdam-DA-NLSTM | 0.2651 | 2.6945 | 0.4213 | 0.8120 | 0.9024 |
Table 6. Evaluation results (ECG signal).

| Model | MAE | MAPE | RMSE | R² | R²(TS) |
|---|---|---|---|---|---|
| SVM | 0.2991 | 3.0238 | 0.4832 | 0.7232 | 0.8136 |
| RNN | 0.2834 | 2.7622 | 0.4025 | 0.7621 | 0.8648 |
| GRU | 0.2802 | 2.3254 | 0.3819 | 0.7796 | 0.8432 |
| LSTM | 0.2725 | 2.2435 | 0.3021 | 0.7728 | 0.8560 |
| A-LSTM | 0.2531 | 1.8432 | 0.2563 | 0.8126 | 0.8922 |
| DA-LSTM | 0.2289 | 1.2003 | 0.2016 | 0.8345 | 0.8837 |
| RAdam-DA-NLSTM | 0.2232 | 1.2189 | 0.1782 | 0.8426 | 0.8856 |
Table 7. Evaluation results (BCG signal).

| Model | MAE | MAPE | RMSE | R² | R²(TS) |
|---|---|---|---|---|---|
| SVM | 0.5326 | 3.2431 | 0.6238 | 0.6532 | 0.6865 |
| RNN | 0.3211 | 3.0121 | 0.3442 | 0.7254 | 0.7773 |
| GRU | 0.2967 | 2.8320 | 0.3002 | 0.7422 | 0.7932 |
| LSTM | 0.2332 | 2.9233 | 0.3126 | 0.7527 | 0.7942 |
| A-LSTM | 0.2017 | 2.5321 | 0.2812 | 0.7632 | 0.8329 |
| DA-LSTM | 0.1822 | 2.3208 | 0.2636 | 0.7921 | 0.8232 |
| RAdam-DA-NLSTM | 0.1675 | 2.3026 | 0.2431 | 0.8053 | 0.8321 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

