Article

A Performance Evaluation of the Alpha-Beta (α-β) Filter Algorithm with Different Learning Models: DBN, DELM, and SVM

1 Department of Environmental & IT Engineering, Chungnam National University, Daejeon 34134, Korea
2 Department of Computer Science & Engineering, Chungnam National University, Daejeon 34134, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9429; https://doi.org/10.3390/app12199429
Submission received: 30 August 2022 / Revised: 15 September 2022 / Accepted: 16 September 2022 / Published: 20 September 2022
(This article belongs to the Special Issue Future Information & Communication Engineering 2022)

Abstract:
In this paper, we present a new multiple-learning-to-prediction model that uses three different machine-learning methods to improve the accuracy of the α-β filter algorithm. The parameters α and β were tuned under dynamic conditions instead of static conditions. The proposed system was designed to use the deep belief network (DBN), the deep extreme learning machine (DELM), and the support vector machine (SVM) as three different learning algorithms. The parameters learned by these machine-learning algorithms were passed to the α-β filter algorithm, acting as a prediction module, which produced the final predicted results. The MAE and RMSE were used to evaluate the performance of the proposed α-β filter with the different learning algorithms. Each algorithm recorded different best-case accuracy results: with the DBN, we achieved 3.60 and 2.61; with the DELM, 3.90 and 2.81; and with the SVM, 4.0 and 3.21, in terms of the RMSE and MAE, respectively, compared to 5.21 and 3.95 for the conventional filter. When assessed against the typical alpha-beta filter algorithm, the proposed system provided results with better accuracy.

1. Introduction

Tracking filters are vital in target-tracking problems. By using these filters, we can tackle real estimation problems and reduce tracking errors. The α-β family of filters was developed during the Cold War [1]. This family of filters is usually utilized to track constant-velocity and constant-acceleration targets, with the Kalman filter serving as the continual state filter. However, the ability of these filters to accurately track high-momentum maneuvering targets, characterized by jerky motions, is minimal. The alpha-beta (α-β) filter algorithm [2] is a suitable method for the observer; it is mainly used for data smoothing, control, and estimation. This algorithm is very lightweight and has the same functionality as the Kalman filter. If we compare the α-β filter’s performance with the Kalman filter [3] and the linear filter, the α-β filter performs better because it does not require a complex system model. Another attractive feature of this filter is that it requires less memory and computation time.
State-of-the-art progress in technology, including the deep belief network (DBN) [4], the deep extreme learning machine (DELM) [5], classification and regression trees (CARTs) [6], the support vector machine (SVM) algorithm [7], and other machine-learning advancements, has improved our living standards and assists us in different ways. These methods are mainly based on knowledge and data extracted from existing and past records, which enable us to make progressive decisions for the future, minimizing losses and achieving the maximum benefits [2]. To benefit from these algorithms as much as possible, we first train them on historical data; the more data an algorithm has access to, the more accurate its results are. When the training stage of such an algorithm is finished, the algorithm is ready for use in the designed application. However, at this stage, one issue arises: these algorithms are designed for a particular task and setting; therefore, their performance degrades over time as their operational environments change. Numerous well-known algorithms are used to overcome this limitation [1], such as the DELM, DBN, and SVM, and the stacked generalization [8] technique has been proposed to improve prediction and classification accuracy.
When such a machine-learning technique [9] is attached to an algorithm, the performance may improve in different ways. We applied these three algorithms as learning units to the α-β filter to boost its performance. The alpha-beta filter algorithm is the most basic linear observer method, and it is utilized to evaluate problems and control applications. It is derived from the Kalman filter. The Kalman filter [10] is a more complex method; in comparison, the alpha-beta filter is easier to use in different applications, and it also displays a better performance than other filters.
In this paper, new ML-based algorithms were implemented, and three different combinations of these algorithms were used to enhance the accuracy of the α-β filter algorithm and to tune the parameters α and β under dynamic conditions. The proposed system uses the deep belief network, support vector machine, and deep extreme learning machine as three different learning algorithms. The parameters learned by these machine-learning algorithms are passed to the α-β filter algorithm, acting as a prediction module, which provides the final predicted results. The MAE and RMSE were used to calculate the performance of the proposed α-β filter with the different learning algorithms. A comparison with the typical alpha-beta filter algorithm showed that the proposed system provides results with better accuracy. Finally, we compared the results of the DELM, DBN, and SVM and concluded that when the DBN was attached to the alpha-beta filter, the performance was very high compared to the other algorithms. Figure 1 shows the conceptual model of the alpha-beta filter. Each algorithm recorded different best-case accuracy results: for the DBN, we achieved 3.60 and 2.61; for the DELM, we obtained best-case results of 3.90 and 2.81; and for the SVM, 4.0 and 3.21 were attained in terms of the RMSE and MAE, respectively, compared to 5.21 and 3.95. In summary, the main contributions of our paper are as follows:
  • The development of a deep-belief-network-based alpha-beta filter;
  • The development of an SVM-based alpha-beta filter;
  • The development of a DELM-based alpha-beta filter;
  • The performance evaluation of both conventional algorithms and the proposed algorithm;
  • A shift from the static approach to a dynamic approach.
The paper is organized as follows: In Section 2, related work regarding similar filters is discussed in detail. Section 3 sheds light on the proposed methodology of the different learning methods for the alpha-beta filter, including the DELM, DBN, and SVM. In Section 4 and Section 5, the implementation and results are discussed briefly. Finally, Section 6 concludes the paper. The abbreviations used in this paper are defined in Table 1.

2. Related Work

We carried out a broad review of the literature on several renowned performance evaluation and prediction techniques. In recent years, researchers have proposed several techniques to develop and enhance filtering mathematics, and these enhanced models are used in different practical application areas.
Discrete data are commonly applied to predict the kinematics of moving objects. These data are related to air traffic control, antisubmarine warfare, missile interception, and similar distinguished applications. Methods have been specially designed for radar tracking to estimate velocity and position from noisy data. Some of the filters from this family, comparable to the Kalman filter [11,12,13,14,15], have been used for applications beyond accurate tracking and prediction. The author of Reference [16] used the filters to make predictions and track rate fluctuations. Tracking parameters were also a topic of interest for the author of Reference [17]. The author of Reference [18] aimed to improve the efficiency of the filter family and introduced a new method to improve their performance. Their contribution also shed light on how balanced α-β filters can be developed in terms of performance.
Moreover, α-β-γ filters have been applied in computer vision for different applications. The author of Reference [19] compared the α-β-γ filter’s performance with that of the Kalman filter. They observed that the Kalman filter’s coefficients converged to approximately constant levels, and for this reason, its computation was shown to be inefficient. Given this inefficiency, the α-β filter gave a good performance, and these promising results show that, compared with the Kalman filter, this filter has lower computational-time requirements. The author of Reference [20] also implemented similar filters to forecast target locations within an image plane. The target’s position was captured in every iteration to predict one-step-ahead locations by feeding the data into an α-β-γ filter. In testing, this method was shown to be effective at improving the tracking performance. The author of Reference [21] compared the α-β-γ filter and the Kalman filter in terms of their performance. They refined the filter parameter values to show that the α-β-γ filter performs better than the Kalman filter. The author of Reference [22] proposed a cascaded Proportional-Integral-Derivative (PID) control law to control motor positions, using an α-β-γ filter. This method using α-β filters has a new design and procedure and obtains more precise results.
Recently, the author of Reference [23] proposed a genetic-based algorithm to determine the best parameter values of the α-β-γ filter. The noise in the filter was at acceptable levels, and they also achieved a better improvement in performance.
The author of Reference [24] proposed a model to enhance tracking accuracy, using the α-β-γ-δ filter; this method is named “the third-order filter” (in which four parameters are used, i.e., the α-β-γ-δ filter). The added value is the third temporal derivative of the value of interest (called “jerk”). The filter used for tracking can also forecast the second-order derivative of the value of interest. By using these second- and third-order derivatives, the author claimed that the tracking-filter accuracy was notably improved. In Reference [14], a feed-forward backpropagation neural network model with tan-sigmoid and linear transfer functions was developed to forecast the consumption of energy in smart houses. In Reference [21], another method was proposed, using a related type of predictive model and a multilayer perceptron (MLP) for short-term energy usage; the Levenberg–Marquardt backpropagation algorithm and a scaled conjugate gradient were used. In Reference [22], an ANN energy prediction technique for smart homes was proposed to forecast energy consumption over different time periods (hour, day, week, and year). This strong NN and pro-energy system assists in forecasting and assembling the energy capacity. The Taguchi technique has been used to estimate the influence of data on the energy capacity [23]. The author of Reference [24] proposed another effective hybrid method utilizing an autoregressive integrated moving average (ARIMA) for energy prediction. The author of Reference [25] used composite ANN and PSO algorithms for the optimization of the energy consumption of electric apparatus. This method is based on the IoT management of home energy systems (HEMs) for smart homes. The HEM algorithm presented in Reference [26] is also a helpful method for energy-consumption optimization and prediction.
The author of Reference [2] developed a method based on the DELM to improve the accuracy of the α-β filter algorithm. For instance, the author of Reference [25] used the bat algorithm and the alpha-beta filter algorithm to determine parameter preferences and optimize the consumption of energy in smart homes, and the deep ELM was also used. The author of Reference [26] proposed a method named the adaptive alpha-beta filter and attached it to the robust BPNN, and this method was used for innovative target tracking. The author of Reference [27] proposed a comparison-based method to solve the threshold problem, using the alpha-beta family of filters. When working with different devices to improve living standards, we set a threshold, i.e., positive or negative. However, this does not mean that if the value is positive, the living standards are improved, or if the value is negative, the living standards are worsened. Instead, error values were set to enable comparisons, and the alpha-beta filter family was used to achieve the best solution for this problem.
The author of Reference [28] proposed a new accuracy-improvement-based methodology for the alpha-beta filter. They developed a method based on ANN learning in prediction methods for indoor navigation systems to enhance the precision of the alpha-beta filter by reducing the error. The ANN [29] is also the most widely used algorithm.

3. Proposed Schemes

We proposed different algorithms, including the DELM, DBN, and SVM, and added them to the α-β filter. These three machine-learning algorithms were used to evaluate improvements in accuracy and performance, and we compared their results with the conventional α-β filter. Usually, historical data are used to train forecasting algorithms; this training determines the hidden relationships between input and output values. The input data are then used to train a model whose purpose is to predict outputs. Predictive algorithms perform well when the training-data environments are similar to the input data and application settings. However, conventional prediction algorithms do not allow trained models to vary under dynamic input conditions.
To deal with such problems, we proposed different learning models for application to prediction models. The learning models used the DELM, DBN, and SVM; the prediction model used the α-β filter algorithm. These learning modules train on the data and tune the prediction model to increase its prediction precision and performance. The complete conceptual model is shown in Figure 1. In our proposed design, the learning module serves as a monitor that retrieves the output of the forecasting algorithm as feedback and constantly observes its performance. The learning unit may also consider external constraints that may influence the α-β filter algorithm’s performance. When analysis of the prediction algorithm’s output or of the current external factors indicates the need, or when environmental triggers are detected, the tunable parameters are updated by the learning module in the prediction module. The learning unit completely replaces the proficient model in the prediction algorithm to increase its prediction accuracy. The complete architecture of our proposed scheme is shown in Figure 2.
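The learning-to-prediction feedback loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `learned_params` is a hypothetical stand-in for any of the three trained models (DBN, DELM, or SVM), and its internal heuristic is purely illustrative.

```python
def learned_params(temperature, humidity):
    """Hypothetical stand-in for a trained learning module (DBN/DELM/SVM):
    maps current sensor conditions to tuned (alpha, beta) gains."""
    # Toy heuristic for illustration only: trust measurements more
    # when humidity (used here as a noise proxy) is low.
    alpha = max(0.1, min(0.9, 1.0 - humidity / 100.0))
    beta = alpha / 10.0
    return alpha, beta

def predict_step(x_est, v_est, measurement, alpha, beta, dt=1.0):
    """One alpha-beta prediction/correction step."""
    x_pred = x_est + v_est * dt           # project state forward
    residual = measurement - x_pred       # innovation
    x_new = x_pred + alpha * residual     # corrected position (temperature)
    v_new = v_est + beta * residual / dt  # corrected rate of change
    return x_new, v_new

# Closed loop: the learning module re-tunes alpha/beta every iteration.
x, v = 20.0, 0.0
for temp_reading, humidity in [(21.0, 40.0), (22.5, 42.0), (23.1, 45.0)]:
    a, b = learned_params(temp_reading, humidity)
    x, v = predict_step(x, v, temp_reading, a, b)
```

The key design point is that the gains are recomputed per iteration from the current environment rather than fixed at design time, which is the dynamic tuning the proposed system targets.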

3.1. The Deep Belief Network (DBN)-Based Learning Model

We proposed a deep-belief-network (DBN)-based alpha-beta filter. The DBN was first developed by Hinton [30]. It is a very popular prediction method which uses real-world data to predict unseen data. To boost the model performance, we combined this model with other typical time-series prototypes. The model contains different learning components with low complexity and reconstructs the inputs probabilistically. It is mainly based on the restricted Boltzmann machine (RBM) and pre-trains the network with the help of unsupervised learning for every pair of layers. The RBM is a two-layer neural network comprising a visible layer and a hidden layer of Boolean units.
In the learning module, we used the deep belief network. The inputs to the DBN are the temperature-sensor and humidity-sensor values; the outputs of the DBN are the alpha-beta values used in the prediction module. The model comprises two inputs, and its probability distribution can learn various sets of data. Furthermore, the model consists of a visible layer of units with symmetrical connections and only one hidden layer (HL) of hidden components. There are no interconnections inside the same layer, so the network forms a bipartite graph over its neurons. The hidden-layer probability distribution and the layer-wise configuration of the learning process [31] are given as follows:
$$E(v, h) = -\sum_{i=1}^{n_v} a_i v_i - \sum_{y=1}^{n_h} b_y h_y - \sum_{i=1}^{n_v} \sum_{y=1}^{n_h} h_y w_{y,i} v_i \qquad (1)$$
$$P(v, h) = \frac{e^{-E(v,h)}}{\sum_{v}\sum_{h} e^{-E(v,h)}} \qquad (2)$$
Here, $v_i$ is the binary state of the $i$th neuron in the visible layer, and $n_v$ is the number of neurons in the visible layer. Similarly, $h_y$ is the binary state of the $y$th Boolean neuron in the hidden layer, and $n_h$ is the number of neurons in the hidden layer. $w_{y,i}$ is the weight matrix between the HL and the visible layer, $a_i$ is the bias vector for the visible layer, and $b_y$ is that for the hidden layer:
$$P(h_y = 1 \mid v) = \sigma\!\left(b_y + \sum_{i=1}^{n_v} w_{y,i} v_i\right) \qquad (3)$$
$$P(v_i = 1 \mid h) = \sigma\!\left(a_i + \sum_{y=1}^{n_h} w_{y,i} h_y\right) \qquad (4)$$
The above two equations symbolize the activation functions of the HL and the visible layer, where the sigmoid activation function [2] is referred to as σ.
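For concreteness, the conditional activations of Equations (3) and (4) can be computed directly. The sketch below is a generic RBM Gibbs-sampling half-step; the weights, biases, and layer sizes are illustrative values, not parameters learned from the paper's data:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hidden_probs(v, W, b):
    """P(h_y = 1 | v) = sigma(b_y + sum_i w_{y,i} v_i), Eq. (3)."""
    return [sigmoid(b[y] + sum(W[y][i] * v[i] for i in range(len(v))))
            for y in range(len(b))]

def visible_probs(h, W, a):
    """P(v_i = 1 | h) = sigma(a_i + sum_y w_{y,i} h_y), Eq. (4)."""
    return [sigmoid(a[i] + sum(W[y][i] * h[y] for y in range(len(h))))
            for i in range(len(a))]

def sample(probs, rng):
    """Draw Boolean units from their activation probabilities."""
    return [1 if rng.random() < p else 0 for p in probs]

# Two visible units (e.g., temperature and humidity features, binarized)
# and three hidden units.
W = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]]  # 3 x 2 weight matrix w_{y,i}
a = [0.0, 0.1]                               # visible biases a_i
b = [0.2, -0.1, 0.0]                         # hidden biases b_y

rng = random.Random(0)
v = [1, 0]
p_h = hidden_probs(v, W, b)
h = sample(p_h, rng)
p_v = visible_probs(h, W, a)  # one half-step of Gibbs sampling
```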
Now, we explain how the DBN regression model predicts the final output; different models are created to train each terminal node. Normalization was applied to the training and test datasets to rescale each feature’s original data between 0 and 1. The Min–Max scaler function was used, and the scaled data were fed into the training model. Here, $x_i$ denotes the original value of the input feature, $\min(x)$ is the feature’s minimum value, $\max(x)$ is its maximum value, and new $x_i$ is the rescaled value.
The prediction performance for the test data was measured by the error metrics. Equation (5) shows how the data are normalized:
$$\mathrm{new}\ x_i = \frac{x_i - \min(x)}{\max(x) - \min(x)} \qquad (5)$$
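Equation (5) is standard min-max rescaling; a minimal Python version, with a guard for constant features, is:

```python
def min_max_scale(values):
    """Rescale each feature value into [0, 1] per Equation (5):
    new_x_i = (x_i - min(x)) / (max(x) - min(x))."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant feature: avoid division by zero
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

# Illustrative temperature readings (degrees Celsius).
temps = [12.0, 18.0, 24.0, 30.0]
scaled = min_max_scale(temps)
```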
In the prediction module, we used the alpha-beta filter. The inputs to the filter are the temperature-sensor values. The sensor-reading module obtains the temperature readings and passes them to the compute-delta-temperature module. The outputs of the compute-delta-temperature module are used as inputs to the predicted-actual-temperature module and the updated-velocity module. The predicted-actual-temperature and initial-state values are input to the previous-updated-state module. The previous-updated-velocity module takes the updated-velocity and initial-velocity-state values as inputs. The predicted-actual-temperature module takes the computed delta temperature, the estimated temperature, and the alpha values as inputs and produces the actual temperature value. The structure of the deep belief network model is shown in Figure 3.

3.2. Deep Extreme Learning Machine (DELM)-Based Model

In the proposed method, the beneficial features of deep learning (DL) and the ELM are combined, and the approach is called the DELM. Figure 4 shows the model of the DELM; the DELM model uses an input layer of two neurons, five hidden layers with each individual HL consisting of twelve neurons, and two output neurons.
We trained the DELM on historical data to improve the algorithm by taking a different combination of training and testing samples and validating the output results with input data samples. In the training unit of the proposed model, two input parameters were taken, i.e., temperature and humidity values, by a DELM. The model output was also alpha-beta filter values, which were given to the prediction unit as input values. The DELM worked by tuning the parameters to the alpha-beta filter, addressing estimated errors in sensor readings, and intelligently updating the alpha and beta values. The alpha-beta performance was continuously monitored by examining its output results in the training unit.
The alpha values (from the current temperature values) and the beta values (from the humidity values) were passed from the deep extreme learning machine to the prediction-algorithm unit. The prediction-algorithm unit was based on the alpha and beta filter values, which were taken as inputs to predict the desired temperature. The alpha-beta filter does not require all the historical input values; only the prior outcome values help the algorithm become more intelligent. The system determines the actual state information from the prior state values, so the algorithm is lightweight [11]. In the current research, we tested the dataset of noisy temperature-sensor readings on the alpha-beta filter.
Noise always depends on the temperature-sensor readings and other conditions, and the temperature depends strongly on the humidity level, being affected by increases or decreases in environmental humidity. When the filter reads the temperature sensor, it removes noise from the incoming data, acquires the temperature with respect to time, T, and estimates the accurate temperature. The performance is mainly determined by tuning the alpha (temperature) and beta (humidity-level) parameters in the alpha-beta filter. The tuned parameters are updated after every iteration. The structure of the DELM is shown in Figure 4.

Deep Extreme Learning Machine

The deep ELM is a renowned and exciting method which combines the extreme learning machine (ELM) and deep learning. The standard ANN algorithm needs extensive training data, consumes extra time, has a slow learning rate, and may lead to overfitting of the model [2]. The ELM method has been used in classification and regression tasks because of its efficiency; this technique is computationally cheap, and its learning is fast. The model comprises an input layer of two neurons and five hidden layers, with each HL consisting of 10 neurons, and two output neurons.
Initially, an input feature, $A = [a_{k1}\ a_{k2}\ a_{k3}\ \ldots\ a_{kZ}]$; a training sample, $[A, B] = \{a_k, b_k\}$ $(k = 1, 2, \ldots, Z)$; and a target matrix, $B = [b_{l1}\ b_{l2}\ b_{l3}\ \ldots\ b_{lZ}]$, were taken. The matrices $A$ and $B$ are described by Equations (6) and (7), respectively. The terms $a$ and $b$ denote the features of the input and output matrices. The weights between the input layer and the hidden layer are adjusted arbitrarily by the ELM; the weight between the $k$th input-layer node and the $l$th hidden-layer node is represented by $w_{kl}$, as shown in Equation (8). Furthermore, the weights of the HL and output-layer neurons are randomly fixed by the ELM and are given in Equation (9), symbolized by $\gamma_{kl}$:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1z} \\ a_{21} & a_{22} & \cdots & a_{2z} \\ a_{31} & a_{32} & \cdots & a_{3z} \\ \vdots & \vdots & & \vdots \\ a_{p1} & a_{p2} & \cdots & a_{pz} \end{bmatrix} \qquad (6)$$
$$B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1z} \\ b_{21} & b_{22} & \cdots & b_{2z} \\ b_{31} & b_{32} & \cdots & b_{3z} \\ \vdots & \vdots & & \vdots \\ b_{r1} & b_{r2} & \cdots & b_{rz} \end{bmatrix} \qquad (7)$$
$$w = \begin{bmatrix} w_{11} & w_{12} & \cdots & w_{1p} \\ w_{21} & w_{22} & \cdots & w_{2p} \\ w_{31} & w_{32} & \cdots & w_{3p} \\ \vdots & \vdots & & \vdots \\ w_{i1} & w_{i2} & \cdots & w_{ip} \end{bmatrix} \qquad (8)$$
$$\gamma = \begin{bmatrix} \gamma_{11} & \gamma_{12} & \cdots & \gamma_{1r} \\ \gamma_{21} & \gamma_{22} & \cdots & \gamma_{2r} \\ \gamma_{31} & \gamma_{32} & \cdots & \gamma_{3r} \\ \vdots & \vdots & & \vdots \\ \gamma_{p1} & \gamma_{p2} & \cdots & \gamma_{pr} \end{bmatrix} \qquad (9)$$
Then, the biases in the hidden layers are arbitrarily selected by the extreme learning machine, as given by Equation (10). Furthermore, the ELM computes $g(x)$, the activation function used for the ELM. Equation (11) describes the resultant matrix, and its column vector, $v_l$, is depicted in Equation (12):
$$B = [b_1, b_2, b_3, \ldots, b_p]^{T} \qquad (10)$$
$$V = [v_1, v_2, v_3, \ldots, v_Z]_{r \times Z} \qquad (11)$$
$$v_l = \begin{bmatrix} v_{1l} \\ v_{2l} \\ \vdots \\ v_{rl} \end{bmatrix} = \begin{bmatrix} \sum_{k=1}^{q} \gamma_{k1}\, g(w_k a_l + b_k) \\ \sum_{k=1}^{q} \gamma_{k2}\, g(w_k a_l + b_k) \\ \vdots \\ \sum_{k=1}^{q} \gamma_{kr}\, g(w_k a_l + b_k) \end{bmatrix} \quad (l = 1, 2, 3, \ldots, Z) \qquad (12)$$
Combining Equations (11) and (12), the desired values are attained in Equation (13). $H$ denotes the hidden-layer output, and its pseudoinverse is written as $H^{+}$. The least-squares method was used to solve for the weight-matrix parameters, denoted by $\gamma$, as shown in Equation (14) [2]:
$$H\gamma = V \qquad (13)$$
$$\gamma = H^{+} V \qquad (14)$$
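The closed-form training step of Equations (13) and (14) — solve $H\gamma = V$ via the Moore-Penrose pseudoinverse — can be illustrated with a single-hidden-layer ELM in NumPy. The toy data, layer size, and random seed below are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy regression data: 100 samples, 2 input features, 1 output.
A = rng.uniform(-1, 1, size=(100, 2))          # inputs
V_target = np.sin(A[:, :1]) + 0.5 * A[:, 1:]   # smooth target function

# Randomly fixed input-to-hidden weights w and biases b (never trained).
n_hidden = 20
w = rng.normal(size=(2, n_hidden))
bias = rng.normal(size=(1, n_hidden))

def g(x):
    return 1.0 / (1.0 + np.exp(-x))  # sigmoid activation, as in Eq. (24)

H = g(A @ w + bias)                   # hidden-layer output matrix
gamma = np.linalg.pinv(H) @ V_target  # Eq. (14): gamma = H^+ V

predictions = H @ gamma               # Eq. (13): H gamma ~ V
train_rmse = float(np.sqrt(np.mean((predictions - V_target) ** 2)))
```

Because only `gamma` is solved for, training reduces to one linear-algebra step, which is why ELM learning is fast compared with backpropagation.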
The regularized [32] $\gamma$ values were used to further generalize and stabilize the network. The trial-and-error method was chosen because no specific method exists for determining the number of hidden-layer neurons, and this method also works for selecting the number of nodes. The output of the neurons in the second hidden layer is calculated by Equation (15):
$$H_1 = V\gamma^{+} \qquad (15)$$
where $\gamma^{+}$ is the generalized inverse of the matrix $\gamma$, and the result of HL-2 can be computed by Equation (16):
$$H_2 = V\gamma^{+} \qquad (16)$$
$$g(W_1 H + B_1) = H_1 \qquad (17)$$
The parameters used in Equation (17) are defined as follows: $W_1$ signifies the weight matrix of the initial two HLs, while $H$ represents the HL output. $H_1$ denotes the expected output of the first HL, and $B_1$ is the bias:
$$W_{HE} = g^{-1}(H_1)\, H_E^{+} \qquad (18)$$
where $H_E^{+}$ is the inverse of $H_E$, and the AF is denoted by $g(x)$; Equation (18) updates the expected output of HL2. Any suitable AF $g(x)$ can be identified, as shown below:
$$H_2 = g(W_{HE} H_E) \qquad (19)$$
The update to γ , which is the weight matrix between HL2 and HL3, is presented in Equation (20), and H 2 + is the inverse of H 2 ; the final result of HL3 is given in Equation (21):
$$\gamma_{new} = H_2^{+} V \qquad (20)$$
$$H_3 = V\gamma_{new}^{+} \qquad (21)$$
where $\gamma_{new}$ is the weight matrix, and its inverse is written as $\gamma_{new}^{+}$. The deep ELM defines the matrix $W_{HE1} = [B_2, W_2]$, and the final results of the third layer are calculated by using Equations (22) and (23):
$$H_3 = g(H_2 W_2 + B_2) = g(W_{HE1} H_{E1}) \qquad (22)$$
$$W_{HE1} = g^{-1}(H_3)\, H_{E1}^{+} \qquad (23)$$
For the rest of the layers, $g(x)$ is the activation function (AF), and $g^{-1}(x)$ is its inverse AF; the second-hidden-layer output is denoted by $H_2$. The weights between hidden layers 2 and 3 are represented by $W_2$, and $B_2$ signifies the bias. The inverse of $H_{E1}$ is characterized by $H_{E1}^{+}$, and Equation (24) gives the sigmoid function. Now, we compute the output of the third HL, as given in Equation (25):
$$g(x) = \frac{1}{1 + e^{-x}} \qquad (24)$$
$$H_3 = g(W_{HE1} H_{E1}) \qquad (25)$$
Regarding the third HL and the final-layer output, the resultant weight matrix is computed in Equation (26), and the expected output of the fourth HL is shown in Equation (27). The calculations for the other hidden layers, $H_5$, $H_6$, $H_7$, and so on, follow the same procedure:
$$\gamma_{new} = H_4^{T} \left( \frac{1}{\lambda} + H_4^{T} H_4 \right)^{-1} V \qquad (26)$$
$$H_4 = V\gamma_{new}^{+} \qquad (27)$$

3.3. SVM-Based Learning Module

The third algorithm used is also a prevalent method: we proposed a support vector machine (SVM) approach, a method primarily created for binary classification tasks. Initially, we trained the SVM on historical data in the training unit by taking different combinations of training and testing samples and then validating the output results against input data samples. In the training unit of the proposed model, we took two input parameters, i.e., temperature and humidity values, via the SVM. The SVM output was also alpha-beta filter values, which were passed to the prediction unit as input values. The SVM worked by tuning the parameters of the alpha-beta filter, continuously trying to address the assessed errors in sensor readings, and intelligently updating the alpha and beta values. The performance of the alpha-beta filter was continuously monitored by examining its output results in the training unit. Today, many variations of the SVM have been developed to resolve denser classification and regression problems [2] with the help of kernel tricks. SVMs always rely on the input size and the nature of the problem, and a suitable kernel function can be selected from radial basis functions (RBFs), linear functions, polynomial functions, etc. For the selection of kernels, we performed experiments using three different kernels. The results attained with a linear kernel were the best and are reported in this paper. The structure of the proposed SVM-based alpha-beta filter is shown in Figure 5.
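The three kernel families mentioned above can be written out explicitly. The sketch below shows the standard kernel functions only (the hyperparameters `degree`, `c`, and `gamma` are illustrative defaults, not values tuned in our experiments):

```python
import math

def linear_kernel(x, z):
    """k(x, z) = <x, z> -- the kernel family that performed best here."""
    return sum(xi * zi for xi, zi in zip(x, z))

def polynomial_kernel(x, z, degree=3, c=1.0):
    """k(x, z) = (<x, z> + c)^degree."""
    return (linear_kernel(x, z) + c) ** degree

def rbf_kernel(x, z, gamma=0.5):
    """k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

# Two feature vectors (temperature, humidity), already min-max scaled.
x, z = [0.2, 0.7], [0.4, 0.5]
k_lin = linear_kernel(x, z)
k_pol = polynomial_kernel(x, z)
k_rbf = rbf_kernel(x, z)
```

Swapping the kernel changes only how similarity between samples is measured, which is why kernel selection can be driven by simple comparative experiments, as done here.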

3.4. Alpha-Beta Filter

Filters belonging to the alpha-beta family are the simplest filters used for smoothing, control, and estimation. Their architecture is similar to that of linear filters. The main benefit of the alpha-beta filter is that it is very easy to use, because no complex system model is required to train it for another algorithm. The model is derived from the KF [2]. The alpha-beta filter needs very little space and computation power compared to the Kalman filter. The equations shown below define each algorithm step mathematically. In the first step, initialization occurs, as represented in Equations (28) and (29):
$$x_{k-1} = c_1 \qquad (28)$$
$$v_{k-1} = c_2 \qquad (29)$$
Equation (30) was applied to update the position, and to read sensor data, Equation (31) was used:
$$\hat{x}_k = x_{k-1} + v_{k-1} \cdot \Delta t \qquad (30)$$
$$x_j = \mathrm{Sensor()} \qquad (31)$$
To compute the difference, Equation (32) was used, and for the calculation and prediction of positions, Equation (33) was applied:
$$\Delta \tilde{x}_k = x_j - \hat{x}_k \qquad (32)$$
$$\tilde{x}_k = \hat{x}_k + \alpha \cdot \Delta \tilde{x}_k \qquad (33)$$
To calculate predicted velocity, Equation (34) was applied, and to update the position and velocity for the next iteration, we used Equations (35) and (36):
$$\hat{v}_k = \hat{v}_{k-1} + \beta \cdot \Delta \tilde{x}_k / \Delta t \qquad (34)$$
$$x_{k-1} = x_k \qquad (35)$$
$$v_{k-1} = v_k \qquad (36)$$
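Equations (28)-(36) translate directly into code. Below is a minimal Python implementation of one filtering pass over a sequence of noisy readings; the gain values, initial constants, and sample readings are illustrative:

```python
def alpha_beta_filter(readings, alpha=0.85, beta=0.005, dt=1.0,
                      x0=0.0, v0=0.0):
    """Run the alpha-beta filter of Equations (28)-(36) over a sequence
    of sensor readings; returns the smoothed position estimates."""
    x_prev, v_prev = x0, v0              # Eqs. (28)-(29): initialization
    estimates = []
    for x_j in readings:                 # Eq. (31): read sensor
        x_pred = x_prev + v_prev * dt    # Eq. (30): project position
        dx = x_j - x_pred                # Eq. (32): residual
        x_k = x_pred + alpha * dx        # Eq. (33): corrected position
        v_k = v_prev + beta * dx / dt    # Eq. (34): corrected velocity
        x_prev, v_prev = x_k, v_k        # Eqs. (35)-(36): roll the state
        estimates.append(x_k)
    return estimates

# Illustrative noisy temperature readings.
noisy = [20.3, 20.9, 20.1, 21.4, 21.0, 21.8]
smoothed = alpha_beta_filter(noisy, x0=20.0)
```

Only the previous position and velocity are carried between iterations, which is the source of the filter's low memory and computation cost noted above.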

4. Implementation and Performance Evaluation

In this section, we discuss the implementation of the proposed method and the final results.

4.1. Implementation

Experiments on the deep-belief-network-based α-β filter, the deep-extreme-learning-machine-based α-β filter, and the support-vector-machine-based alpha-beta filter were carried out on an MSI DESKTOP-SC4U005. The computer has an “11th Gen Intel(R) Core(TM) i7-11700KF CPU @ 3.60 GHz, 32 GB RAM, Nvidia Quadro M1200 4 GB graphics card, and MATLAB R2022a”. The implementation and simulation configuration are shown in Table 2. For the analysis of our models and the performance assessment of the multi-learning-algorithm-based α-β filter, we used a real weather dataset [2] of Korea: data gathered by temperature and humidity sensors over three years, comprising hourly data with simulated noisy sensor readings, to which some errors were added. The total number of days over three years was 365 × 3 = 1095, giving 26,280 data instances in total. Initially, when the filter read the sensor values via the typical method, a root-mean-square-error value of 5.21 was obtained, which is very high in terms of the RMSE. By using these three algorithms, we attempted to reduce this error. The data representation is shown in Figure 6.

4.2. Performance Criterion

The overall performance of the proposed approach was comprehensively assessed by using two deterministic performance metrics, the root mean square error (RMSE) and the mean absolute error (MAE); in addition, a heatmap representation of the correlation between the training and testing datasets was created, as shown in Figure 7.

RMSE = √( (1/N) Σ_{j=1}^{N} (R_j − P_j)² )
MAE = (1/N) Σ_{j=1}^{N} |R_j − P_j|
K = Σ(x − x̄)(y − ȳ) / √( Σ(x − x̄)² · Σ(y − ȳ)² )

where R_j is the real value, P_j is the predicted value, N is the number of samples, and K is the correlation coefficient between variables x and y.
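Both error metrics and the correlation coefficient K can be computed directly; a minimal NumPy sketch, with R for the real values and P for the predictions (the array names and example data are ours):

```python
import numpy as np

def rmse(R, P):
    """Root mean square error over N samples."""
    return np.sqrt(np.mean((R - P) ** 2))

def mae(R, P):
    """Mean absolute error over N samples."""
    return np.mean(np.abs(R - P))

def corr(x, y):
    """Pearson correlation coefficient, as used for the heatmap in Figure 7."""
    xd, yd = x - x.mean(), y - y.mean()
    return np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2))

R = np.array([20.1, 21.4, 19.8, 22.0])   # real values (example data)
P = np.array([19.5, 21.0, 20.5, 21.2])   # predicted values (example data)
```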

5. Results and Discussion

In this section, we briefly discuss the multi-algorithm-based alpha-beta filter. The proposed hybrid method was based on the DBN, SVM, and DELM. Figure 8 shows the three different cross-validation paradigms used in training and testing.

5.1. DBN-Based Alpha-Beta Filter Results

The DBN learning model was constructed by using a 70/30 training/testing split. The input to the DBN was composed of temperature and humidity values, and two outputs were produced. The DBN was trained in two steps. In the first step, unsupervised learning was used to construct the RBM network; the RBM settings were a batch size of 12, a momentum of 0, and 300 epochs. In the second step, the network was fine-tuned with the supervised backpropagation algorithm, also using a batch size of 12 and 300 epochs. The learned values of the DBN were passed to the α-β filter to produce the final output values. The mean absolute error and root mean square error were estimated for the typical alpha-beta filter and the proposed DBN-based α-β filter to evaluate their performance. The typical method yielded an RMSE of 5.216 and an MAE of 3.951. We then applied the DBN with two inputs, two outputs, and RBM pretraining, using the three-fold cross-validation method to comprehensively calculate the performance. The best results were an RMSE of 3.605 and an MAE of 2.610, as shown in Table 3. The performance of our model varied depending on the number of nodes constituting the DBN.
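The coupling between a trained learner and the filter can be sketched generically. In the sketch below, `model` is any regressor mapping the current reading to an (α, β) pair, the role played in the paper by the trained DBN (and later the DELM and SVM); the `ConstantModel` is a hypothetical stand-in used only to exercise the plumbing:

```python
def filtered_track(model, measurements, dt=1.0):
    """Run the alpha-beta filter with per-step (alpha, beta) from a learner."""
    x, v = measurements[0], 0.0                   # initialize from the first reading
    estimates = []
    for meas in measurements[1:]:
        alpha, beta = model.predict([[meas]])[0]  # tuned dynamically, not fixed
        x_pred = x + v * dt
        residual = meas - x_pred
        x = x_pred + alpha * residual
        v = v + beta * residual / dt
        estimates.append(x)
    return estimates

class ConstantModel:
    """Hypothetical learner returning one fixed (alpha, beta) pair."""
    def predict(self, X):
        return [[0.8, 0.01] for _ in X]

track = filtered_track(ConstantModel(), [1.0, 2.0, 3.0, 4.0])
```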

5.2. DELM-Based Alpha-Beta Filter Results

We implemented a deep-extreme-learning-machine-based α-β filter using the same 70/30 training/testing split. The input to the DELM was composed of temperature and humidity sensor values. The network had five hidden layers, each composed of 12 neurons with a sigmoid activation function (AF); a linear AF was used by the output layer. We used two output units, named alpha and beta in the final prediction results; these two values were input to the alpha-beta filter to compute the final output. To evaluate the performance of the DELM-based alpha-beta filter, the root mean square error and mean absolute error were estimated for the typical alpha-beta filter and the newly proposed DELM-based α-β filter. Applying the DELM clearly improved on the typical filter. The three-fold cross-validation method was used to comprehensively evaluate the performance. The best results were an RMSE of 3.901 and an MAE of 2.811. The DELM-based results are shown in Figure 9.
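The stated topology (two inputs, five hidden layers of 12 sigmoid neurons, a two-unit linear output) can be sketched as a NumPy forward pass. Note this is an illustration only: the weights below are random placeholders, whereas the actual DELM keeps randomly assigned hidden weights and solves the output weights analytically rather than by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Two inputs (temperature, humidity) -> five hidden layers of 12 neurons -> two outputs
sizes = [2] + [12] * 5 + [2]
weights = [rng.standard_normal((m, n)) * 0.5 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(n) * 0.1 for n in sizes[1:]]

def forward(x):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = sigmoid(x @ W + b)               # sigmoid AF in every hidden layer
    return x @ weights[-1] + biases[-1]      # linear AF in the output layer

alpha, beta = forward(np.array([21.5, 0.63]))  # one (temperature, humidity) sample
```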

5.3. SVM-Based Alpha-Beta Filter Results

Figure 11 shows the sample data (200 data instances) used for the SVM-based alpha-beta filter algorithm. The SVM took two inputs, i.e., temperature and humidity, with ten support vectors; these support vectors were used to map the input features through the kernel transformation. The final values were input into the α-β filter to produce the output results. However, this method is more complex than the other two and produced the weakest output performance among them.
For the performance assessment of the SVM-based α-β filter, the mean absolute error and root mean square error of the typical alpha-beta filter and the newly proposed SVM-based α-β filter were estimated. Although the SVM performed worst among the three learning models, it still improved on the typical filter. The three-fold cross-validation method was used to comprehensively calculate the performance. The best results were an RMSE of 4.015 and an MAE of 3.218. The SVM-based alpha-beta filter output is shown in Figure 10.
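As a sketch of this setup (the paper used MATLAB; here scikit-learn's SVR with an RBF kernel is our substitute, and the data and target functions for α and β are entirely hypothetical):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform([-5.0, 20.0], [35.0, 95.0], size=(200, 2))  # temp (°C), humidity (%)

# Hypothetical smooth targets standing in for the learned alpha and beta values
y_alpha = 0.5 + 0.01 * X[:, 0] + rng.normal(0, 0.01, 200)
y_beta = 0.005 + 0.0001 * X[:, 1] + rng.normal(0, 0.0005, 200)

# The RBF kernel applies the nonlinear kernel transformation; SVR is
# single-output, so one model per filter parameter is required.
model_alpha = SVR(kernel="rbf").fit(X, y_alpha)
model_beta = SVR(kernel="rbf").fit(X, y_beta)

a = model_alpha.predict(X[:1])[0]
b = model_beta.predict(X[:1])[0]
```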
Table 3 compares the performances of the different algorithms; we conclude that the DBN outperforms the DELM and the SVM when applied to the alpha-beta filter. Figure 12 and Figure 13 show the comparison of the typical alpha-beta filter and the proposed multi-method in terms of RMSE and MAE.
Table 3. The comparison of typical alpha-beta filter and proposed multi-method in terms of RMSE and MAE.
Performance Metric | α-β Filter without Learning Algorithm | α-β Filter with DBN | α-β Filter with DELM | α-β Filter with SVM
RMSE ¹ | 5.216 | 3.605 | 3.901 | 4.015
MAE ¹ | 3.951 | 2.610 | 2.811 | 3.218
¹ Root mean square error (RMSE), mean absolute error (MAE).

6. Conclusions

It is always difficult to enhance the prediction accuracy of an algorithm. In this paper, we proposed a new learning-to-prediction model, comprising DBN-, DELM-, and SVM-based α-β filter algorithms, to enhance accuracy by tuning the parameters α and β under dynamic conditions. The proposed system uses the deep belief network, the deep extreme learning machine, and the support vector machine as learning algorithms for the alpha-beta filter to increase its prediction accuracy. The performance of the proposed α-β filter with each learning algorithm was evaluated by using MAE and RMSE values. Comparing the results of the four configurations of the alpha-beta filter algorithm, the best accuracy was achieved by the DBN, with an RMSE of 3.605 and an MAE of 2.610, as compared with the typical alpha-beta filter algorithm, which achieved an RMSE of 5.216 and an MAE of 3.951.

Author Contributions

Data curation, J.K.; formal analysis, J.K.; funding acquisition, K.K.; investigation, J.K.; methodology, J.K.; software, J.K.; supervision, K.K.; validation, K.K.; review and editing, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01441, Artificial Intelligence Convergence Research Center (Chungnam National University)).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

This work was supported by the research fund of Chungnam National University and by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01441, Artificial Intelligence Convergence Research Center (Chungnam National University)).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sklansky, J. Optimizing the Dynamic Parameter of a Track-While-Scan System; RCA Laboratories: Princeton, NJ, USA, 1957. [Google Scholar]
  2. Khan, J.; Fayaz, M.; Hussain, A.; Khalid, S.; Mashwani, W.K.; Gwak, J. An improved alpha beta filter using a deep extreme learning machine. IEEE Access 2021, 9, 61548–61564. [Google Scholar] [CrossRef]
  3. Ullah, I.; Fayaz, M.; Kim, D. Improving accuracy of the Kalman filter algorithm in dynamic conditions using ANN-based learning module. Symmetry 2019, 11, 94. [Google Scholar] [CrossRef]
  4. Phyo, P.P.; Jeenanunta, C. Daily Load Forecasting Based on a Combination of Classification and Regression Tree and Deep Belief Network. IEEE Access 2021, 9, 152226–152242. [Google Scholar] [CrossRef]
  5. Fayaz, M.; Kim, D. A prediction methodology of energy consumption based on deep extreme learning machine and comparative analysis in residential buildings. Electronics 2018, 7, 222. [Google Scholar] [CrossRef]
  6. Lewis, R.J. An introduction to classification and regression tree (CART) analysis. In Proceedings of the Annual Meeting of the Society for Academic Emergency Medicine, San Francisco, CA, USA, 22–25 May 2000; Volume 14. [Google Scholar]
  7. Osuna, E.; Freund, R.; Girosi, F. An improved training algorithm for support vector machines. In Neural Networks for Signal Processing VII, Proceedings of the 1997 IEEE Signal Processing Society Workshop, Amelia Island, FL, USA, 24–26 September 1997; IEEE: Piscataway, NJ, USA, 1997; pp. 276–285. [Google Scholar]
  8. Ting, K.M.; Witten, I.H. Stacked Generalization: When Does It Work? Department of Computer Science, University of Waikato: Hamilton, New Zealand, 1997. [Google Scholar]
  9. Singh, Y.; Bhatia, P.K.; Sangwan, O. A review of studies on machine learning techniques. Int. J. Comput. Sci. Secur. 2007, 1, 70–84. [Google Scholar]
  10. Haykin, S. Kalman Filtering and Neural Networks; John Wiley & Sons: Hoboken, NJ, USA, 2004; Volume 47. [Google Scholar]
  11. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. Mar. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  12. Matthies, L.; Kanade, T.; Szeliski, R. Kalman filter-based algorithms for estimating depth from image sequences. Int. J. Comput. Vis. 1989, 3, 209–238. [Google Scholar] [CrossRef]
  13. Bishop, G.; Welch, G. An introduction to the kalman filter. Proc. SIGGRAPH Course 2001, 8, 41. [Google Scholar]
  14. Mohamed, A.H.; Schwarz, K.P. Adaptive Kalman filtering for INS/GPS. J. Geod. 1999, 73, 193–203. [Google Scholar] [CrossRef]
  15. Perez, E.; Barros, J. An extended Kalman filtering approach for detection and analysis of voltage dips in power systems. Electr. Power Syst. Res. 2008, 78, 618–625. [Google Scholar] [CrossRef]
  16. Kalata, P.R.; Murphy, K.M. /spl alpha/-/spl beta/target tracking with track rate variations. In Proceedings of the Twenty-Ninth Southeastern Symposium on System Theory, Cookeville, TN, USA, 9–11 March 1997; pp. 70–74. [Google Scholar]
  17. Kalata, P.R. The tracking index: A generalized parameter for α-β and α-β-γ target trackers. IEEE Trans. Aerosp. Electron. Syst. 1984, AES-20, 174–182. [Google Scholar] [CrossRef]
  18. Tenne, D.; Singh, T. Optimal design of/spl alpha/-/spl beta/-/spl gamma/(/spl gamma/) filters. In Proceedings of the 2000 American Control Conference, ACC (IEEE Cat. No. 00CH36334), Chicago, IL, USA, 28–30 June 2000; Volume 6, pp. 4348–4352. [Google Scholar]
  19. Corke, P.I.; Good, M.C. Dynamic effects in high-performance visual servoing. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation, Nice, France, 12–14 May 1992; pp. 1838–1839. [Google Scholar]
  20. Stanciu, R.; Oh, P.Y. Human-in-the-loop camera control for a mechatronic broadcast boom. IEEE/ASME Trans. Mechatron. 2007, 12, 41–52. [Google Scholar] [CrossRef]
  21. Stanciu, I.R.; Molnar-Matei, F. Detecting power voltage dips using tracking filters-a comparison against Kalman. Adv. Electr. Comput. Eng. 2012, 12, 77–82. [Google Scholar] [CrossRef]
  22. Ng, K.H.; Yeong, C.F.; Su, E.L.M.; Wong, L.X. Alpha beta gamma filter for cascaded PID motor position control. Procedia Eng. 2012, 41, 244–250. [Google Scholar] [CrossRef]
  23. Lee, T.E.; Su, J.P.; Yu, K.W. Parameter optimization for a third-order sampled-data tracker. In Proceedings of the Second International Conference on Innovative Computing, Information and Control (ICICIC 2007), Kumamoto, Japan, 5–7 September 2007; p. 336. [Google Scholar]
  24. Wu, C.M.; Chang, C.K.; Chu, T.T. An optimal design of target tracker by α-β-γ-δ filter with genetic algorithm. Chin. J. Mech. Eng. 2009, 30, 467–474. [Google Scholar]
  25. Shah, A.S.; Nasir, H.; Fayaz, M.; Lajis, A.; Ullah, I.; Shah, A. Dynamic user preference parameters selection and energy consumption optimization for smart homes using deep extreme learning machine and bat algorithm. IEEE Access 2020, 8, 204744–204762. [Google Scholar] [CrossRef]
  26. Hasan, A.H.; Grachev, A.N. Adaptive α-β-filter for target tracking using real time genetic algorithm. J. Electr. Control. Eng. 2013, 3, 203. [Google Scholar]
  27. Sighencea, B.I.; Stanciu, R.I.; Șorândaru, C.; Căleanu, C.D. The Alpha-Beta Family of Filters to Solve the Threshold Problem: A Comparison. Mathematics 2022, 10, 880. [Google Scholar] [CrossRef]
  28. Jamil, F.; Kim, D.H. Improving accuracy of the alpha–beta filter algorithm using an ANN-based learning mechanism in indoor navigation system. Sensors 2019, 19, 3946. [Google Scholar] [CrossRef]
  29. Qureshi, M.S.; Aljarbouh, A.; Fayaz, M.; Qureshi, M.B.; Mashwani, W.K.; Khan, J. An Efficient Methodology for Water Supply Pipeline Risk Index Prediction for Avoiding Accidental Losses. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 385–393. [Google Scholar] [CrossRef]
  30. Xu, W.; Peng, H.; Zeng, X.; Zhou, F.; Tian, X.; Peng, X. Deep belief network-based AR model for nonlinear time series forecasting. Appl. Soft Comput. 2019, 77, 605–621. [Google Scholar] [CrossRef]
  31. Ouyang, T.; He, Y.; Li, H.; Sun, Z.; Baek, S. Modeling and forecasting short-term power load with copula model and deep belief network. IEEE Trans. Emerg. Top. Comput. Intell. 2019, 3, 127–136. [Google Scholar] [CrossRef]
  32. Ding, S.; Guo, L.; Hou, Y. Extreme learning machine with kernel model based on deep learning. Neural Comput. Appl. 2017, 28, 1975–1984. [Google Scholar] [CrossRef]
Figure 1. The conceptual model of the proposed learning algorithm for the alpha-beta filter model.
Figure 2. The complete architecture of our proposed scheme.
Figure 3. The structure model of deep belief network with alpha-beta filter.
Figure 4. Structure of DELM with alpha-beta filter.
Figure 5. The structure of the proposed SVM-based alpha-beta filter.
Figure 6. Representation of humidity, sensor readings, temperature, and error.
Figure 7. Heatmap representation of the correlation between training and testing datasets.
Figure 8. Three different cross-validation paradigms used in training and testing.
Figure 9. The deep-extreme-learning-machine-based alpha-beta filter results.
Figure 10. SVM-based alpha-beta filter output.
Figure 11. Sample data used for SVM-based alpha-beta filter (200 data instances).
Figure 12. The comparison of typical alpha-beta filter and proposed multi-method in terms of RMSE and MAE.
Figure 13. The error minimization of typical alpha-beta filter and proposed multi-method.
Table 1. Notations with description.

Abbreviation | Definition
DBN | Deep belief network
SVM | Support vector machine
ANN | Artificial neural network
DELM | Deep extreme learning machine
α-β | Alpha-beta
RMSE | Root mean square error
MAE | Mean absolute error
KF | Kalman filter
PID | Proportional integral derivative
MLP | Multilayer perceptron
BP | Backpropagation
HL | Hidden layer
AF | Activation function
Table 2. Implementation and simulation configuration.

Component | Description
Programming tool | MATLAB R2021b
Operating system | Windows 11
CPU | Intel(R) Core (TM) i7-11700KF CPU @ 3.60 GHz
Memory | 32 GB
Learning algorithms | DBN, DELM, and SVM
Target algorithm | Alpha-beta filter
Graphics | Nvidia Quadro M1200
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
