Article

Influence of Time-Series Length and Hyperparameters on Temporal Convolutional Neural Network Training in Low-Power Battery SOC Estimation

School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang 266100, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 10910; https://doi.org/10.3390/app131910910
Submission received: 28 August 2023 / Revised: 28 September 2023 / Accepted: 29 September 2023 / Published: 1 October 2023

Abstract

In battery management systems, state of charge (SOC) estimation is important for ensuring the safety and reliability of batteries. Among the many SOC estimation methods available, neural network approaches are currently the most popular. However, when the battery SOC is low (below 20%), uncertainty in the neural network parameters can introduce significant bias into the SOC estimate. To address this problem, this study proposes a method based on genetic algorithm (GA) optimization of a time-serialization convolutional neural network (TSCNN) model. First, a population is initialized from the TSCNN hyperparameters to be optimized, and the experimental data are converted into time-series data. Neural network models are then built from the individuals in the population, with network performance serving as the fitness function for GA optimization. Finally, an optimized network structure is obtained for accurate SOC estimation. During optimization, individuals can become anomalous, typically by exceeding the parameter limits or dropping to zero. Previously, anomalous data were discarded and new data were regenerated, which reduces the correlation between data. This study therefore proposes a check function that converts anomalous data into normal data by constraining them to the permitted range, preserving the correlation between data. To the best of our knowledge, this is the first time a GA has been used to optimize the time-series length of a convolutional neural network (CNN) jointly with the neural network parameters, so that the time-series length and network parameters achieve the best match. In the experiments, the maximum error was 4.55% on the dynamic stress test (DST) dataset and 2.58% on the urban dynamometer driving schedule (UDDS) dataset, and the estimation error did not surge when the battery SOC fell below 20%. The optimization method proposed for the TSCNN model in this study can therefore effectively improve the accuracy and reliability of SOC estimation in the low-battery state.
Keywords:
TSCNN; SOC; GA

1. Introduction

Numerous studies have been conducted on SOC estimation. Zhang et al. analyzed the application of deep learning in electric vehicle power battery SOC estimation, summarizing the technical process of the deep learning method for lithium battery SOC estimation on commonly used public datasets and the characteristics, advantages, and disadvantages of neural network structures [1]. Qays et al. discussed the latest classification and mathematical models for SOC estimation and envisaged development trends for SOC estimation methods [2]. Wu et al. proposed an online SOC estimation method for lithium-ion batteries based on a simplified electrochemical model (EM), demonstrating the high accuracy of the method in experiments [3]. Ji et al. proposed an online measurement method for battery parameters and SOC estimation using a multi-scale, multi-innovation unscented Kalman filter and extended Kalman filter (MIUKF-EKF) to improve the accuracy of real-time SOC estimation [4]. Zhou et al. proposed a battery SOC estimation method based on an extended Kalman filter (EKF) and a neural network (NN), improving the accuracy of SOC estimation under low-capacity conditions [5]. Tian et al. proposed an SOC estimation method based on a deep neural network (DNN) and Kalman filters. Their experimental results show that this method can improve the robustness of the SOC estimation [6]. Li et al. proposed an online battery SOC estimation modeling method based on EKF and a back propagation (BP) neural network, which was verified to be practical and reasonable [7]. Xing et al. proposed a model combining the dual extended Kalman filter algorithm (DEKF) and BP to estimate and correct SOC in lithium-ion batteries. This method demonstrated good versatility and robustness in experiments [8]. Chen et al. proposed a robust and efficient combined SOC estimation method based on a gated recurrent unit and an adaptive Kalman filter (GRU-AKF), which combines a GRU with a recurrent neural network (RNN) and an AKF. The method showed superior SOC estimation performance and computational efficiency [9]. Fan et al. proposed a long short-term memory (LSTM) neural network combined with an adaptive unscented Kalman filter (AUKF) method to simultaneously estimate SOC and state of energy. The results showed that the method has excellent performance, reduces computational complexity, and improves estimation accuracy [10]. Shu et al. used an LSTM recurrent neural network to build a battery performance characterization model and a rolling learning method to update the parameters. They then used an improved square-root cubature Kalman filter (SRCKF) to estimate the SOC of the batteries. In experiments, the SOC estimation error was less than 2% [11]. Zhang et al. proposed an improved ampere-hour integration method based on an LSTM network model, and the experimental results showed that the error of the improved ampere-hour integration method for SOC estimation was less than 10% [12]. Zhang et al. proposed a joint state of health and state of charge (SOH-SOC) estimation model based on the grey wolf optimizer and back propagation (GWO-BP) neural network-optimized ampere-hour (Ah) integration method, which combines SOH and SOC estimation. The experimental results showed that the estimation error could be stabilized within 5% [13]. Bhattacharjee et al. proposed a model for SOC estimation based on a deep and wide one-dimensional CNN (1DCNN) and a transfer learning mechanism. 
In experiments, the method displayed good estimation accuracy, learning speed, and generalization ability [14]. Zou et al. proposed a CNN based on the Laplace distribution, combining spatial feature extraction and an integrated attention mechanism to capture the complex time degradation characteristics of batteries. By introducing uncertainty estimation, the reliability and accuracy of SOC estimation were improved [15]. Hannan et al. proposed a deep fully convolutional neural network (FCNN) model to estimate SOC under different driving cycles at constant and varying environmental temperatures. This model uses batch processing operations and constructs a data window to accelerate model training [16]. Herle et al. used the temporal convolutional network (TCN) method to estimate SOC, using time-series data to capture temporal changes in data and reduce prediction time [17]. Liu et al. proposed a TCN-based method for estimating the SOC of lithium-ion batteries. The TCN directly maps the battery voltage, current, and temperature to SOC and demonstrates superior ability in processing lithium-ion battery timing data through a specially designed dilated causal convolution structure [18]. Li et al. proposed a fusion convolutional neural network (FCNN) algorithm to estimate battery SOC. The experimental results showed that it can improve the accuracy of SOC estimation [19]. Cui et al. proposed a method based on a CNN-GRU hybrid network to accurately estimate the battery SOC in low-temperature environments. The network exhibits strong generalization ability, high estimation accuracy, and robustness in learning feature parameters and adjusting weights [20]. Fan et al. proposed an SOC estimation method based on the U-Net architecture, processing variable-length input data to output SOC estimates of equal lengths. A symmetrical padding convolutional layer was also proposed to solve the boundary effect of convolutional neural networks (CNNs) and improve the accuracy of edge SOC estimation. Experimental results indicated that the average absolute error of this method was within 1.1% under isothermal conditions [21]. Tao et al. compared four different RNN models for battery SOC estimation to help battery management engineers develop the most appropriate estimation methods [22]. Wang et al. proposed a co-estimation model to estimate battery SOC and capacity. The model uses the least-squares support vector machine (LSSVM) to estimate capacity and then inputs the capacity estimation results into the RNN. The moving window method solves the long-term dependency problem in RNN, and the results indicated that the model performs well [23]. Gong et al. proposed a battery SOC estimation method based on voltage and stress measurements combined with the LSTM neural network, achieving a mean absolute error (MAE) and root mean square error (RMSE) within 0.34% and 0.45%, respectively [24]. Qian et al. proposed a dual-input neural network that combines the GRU and fully connected layers for battery SOC estimation. Experiments showed that this network can provide a more accurate SOC estimation throughout the battery life cycle [25]. Li et al. applied a three-dimensional CNN algorithm (3DCNN) for the first time for battery SOC estimation. They further introduced the FCNN algorithm, which improves the accuracy of the SOC estimation by considering the degree of battery aging [26].
Zhang et al. proposed a series of intelligent SOC estimation methods using GA and particle swarm optimization (PSO) algorithms to optimize BP based on the Levenberg–Marquardt (L-M) algorithm, demonstrating an accuracy and robustness higher than those of the EKF method [27]. Fang et al. proposed a model-based battery SOC estimation method using an adaptive genetic algorithm (AGA) for parameter identification and a fractional unscented Kalman filter (FOUKF) for state estimation. The experimental results showed that the AGA-FOUKF algorithm improves the accuracy of SOC estimation [28]. Ma et al. proposed an efficient method to estimate battery SOC using a BP neural network optimized by a multi-population genetic algorithm (MPGA) to compensate for the nonlinear error of the EKF [29]. Chen et al. proposed an SOC estimation method using the fireworks elite genetic algorithm (FEG-BP) to optimize the BP neural network to solve the problems of traditional BP networks, which are prone to falling into local optima and have low accuracy. The experimental results showed that this method can reduce the SOC estimation error to within 3% [30]. Liu et al. proposed an online fusion estimation method based on BP-GA to estimate the SOC and state of health (SOH) of Li-ion batteries. Compared to the estimation results of BP, genetic algorithm back propagation (GABP) has good accuracy and can reduce algorithm complexity and improve detection efficiency [31]. Chen et al. proposed the use of GA to optimize the key parameters of the GRU model for battery SOC estimation. The authors used GA to optimize the number of layers and neurons in the GRU model structure. The results showed that this method is highly accurate and robust [32]. Zhang et al. combined GA with a fuzzy logic control neural network algorithm, utilizing the neural network to accurately estimate the dynamic SOC of the battery based on the static SOC. This research method improves the energy and time efficiencies of the balance system [33].
Although many studies have enhanced the accuracy of battery SOC estimation by improving the structure and parameters of neural networks, several issues remain unaddressed.
  • Previous studies have found that when the battery SOC is at a relatively low level (SOC < 20%), the error in battery SOC estimation increases significantly, affecting the effectiveness of the final model.
  • The length of the time-series data often affects the accuracy of battery SOC estimation. Determining the time-series length and matching it with the network parameters has a significant impact on battery SOC estimation.
  • Different neural network structures and parameters yield different experimental results. These hyperparameters are typically set based on the data features and the researcher’s experience, and this human involvement introduces experience-based errors that affect the accuracy of battery SOC estimation.
  • When using GA to optimize the population, individual optimization abnormalities are commonly observed within the population. An approach for solving this issue is to regenerate new individuals. However, this reduces the correlation between new and other individuals, thus slowing the convergence speed.
To solve the aforementioned problems, this study proposes a method for optimizing the parameters of the TSCNN model using GA. First, a population comprising the TSCNN hyperparameters to be optimized is initialized. The data are then divided into time series, and a TSCNN model is built from each individual in the population, using the mean square error (MSE) obtained during TSCNN training as the fitness function for GA optimization. A check function converts anomalous data into normal data during GA optimization. When the TSCNN processes time-series data for battery SOC estimation, the model input is richer and carries more time-varying characteristics, reducing the large error that otherwise arises when the battery SOC is below 20%. The estimation performance of the TSCNN model improves further when GA is used to optimize its structure and parameters. The time-series length governs how the data are processed, whereas the structure and parameters of the TSCNN model define the neural network itself; matching the two for the best effect is the task of the GA, which uses the MSE obtained during TSCNN training as the fitness function to perform genetic optimization and finally determine the optimal parameter combination. Following GA optimization, a check function checks and corrects the optimized population, as the optimized data may exhibit anomalies (see Section 3.3.2). In particular, when the anomalous data contain zeros, the neural network model cannot use the data for parameters such as the convolutional kernel length or the number of neurons, producing errors. In previous studies, anomalous data were replaced with a randomly generated usable parameter, which reduces the relevance between parameters. In this study, the check function corrects the anomalous data within their boundary range, ensuring maximum relevance between the corrected data and the other parameters.
The DST and UDDS datasets were used to test the proposed method. Compared with traditional BP neural networks, CNNs, and RNNs, the proposed method achieved the best battery SOC estimation on both datasets, with a maximum error of less than 5% in each. In addition, when the battery SOC was less than 20%, the estimation error did not increase but rather decreased, demonstrating the advantages of this method.

2. Data Analysis

Experimental data were obtained from a LANHE CT-2001 battery-testing system comprising a constant-temperature cabinet, a control computer, and a Samsung 18650 power lithium-ion battery with a nickel–cobalt–manganese (NCM) positive electrode, as shown in Figure 1. The specifications of the lithium-ion battery are listed in Table 1.
To verify the performance of the proposed algorithm, two discharge profiles, UDDS and DST, were used in the experiments. All experiments were conducted at a constant temperature of 20 °C. The battery underwent 10 cycles of discharging from 100% to 0% SOC, with the current, voltage, and corresponding SOC recorded every second. Figure 2 shows the voltage and battery SOC as functions of time in the UDDS experiment, and Figure 3 shows the corresponding current and battery SOC. As shown in Figure 2, the voltage dropped sharply when the battery SOC fell below 20%. The DST experiment involved higher discharge currents and longer discharge and resting times. Figures 4 and 5 show the voltage and current, respectively, together with the SOC as functions of time during the DST experiment. As in Figure 2, a significant voltage drop is observed in Figure 4 when the battery SOC is below 20%.

3. Networks and Methods

3.1. Data Processing by Time Serialization

This study was conducted on a small amount of data, commensurate with real-world applications. The experimental data included only the voltage and current during battery discharge. Estimating the battery SOC from a small number of parameters reduces the number of sensors required, thereby significantly reducing the volume, cost, and failure rate of the battery system.
To eliminate the influence of different data ranges on the experimental results, we normalized the data before training the model to standardize the numerical range, thereby improving the training effect. Normalization can improve the convergence speed and accuracy of the neural network model, thus improving the accuracy of battery SOC estimation. The formula is:
x′ = (x − x_min) / (x_max − x_min)
where x is the original data; x_min and x_max are the minimum and maximum values, respectively; and x′ is the normalized data.
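As a concrete illustration, this min-max scaling takes only a few lines; the sketch below is an assumption-laden example (NumPy and the array names are not from the paper) that applies the formula above to one signal at a time.

```python
import numpy as np

def min_max_normalize(x):
    """Scale a 1-D signal to [0, 1] via x' = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Applied separately to each input channel, e.g.:
# voltage_n = min_max_normalize(voltage); current_n = min_max_normalize(current)
```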
The “many-to-one” SOC prediction model used here requires the data to be processed into time series. Each time series of length L contains L − 1 historical data points plus the current data point. This approach allows each input sample of the model to contain multiple historical discharge data points, thereby improving the accuracy of model prediction.
As shown in Figure 6, X_{t−i} denotes the discharge data at moment t − i. When the sequence length is set to three, slicing operations produce the input sample X_t = [U_{t−2}, U_{t−1}, U_t; I_{t−2}, I_{t−1}, I_t], so that each input sample X_t contains the discharge data at the current moment t and at the two previous moments, t − 1 and t − 2. During model training, historical data were used to predict the current SOC of the battery.
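For concreteness, this slicing can be sketched as follows; this is an illustration under assumed names (make_windows, and voltage/current/soc arrays), not the authors' code.

```python
import numpy as np

def make_windows(voltage, current, soc, length):
    """Slice normalized discharge records into overlapping windows.
    Sample X_t stacks [U, I] for steps t-length+1 .. t; the label is SOC at t."""
    data = np.stack([voltage, current], axis=-1)              # shape (T, 2)
    X = np.stack([data[t - length + 1: t + 1]
                  for t in range(length - 1, len(data))])     # (T-L+1, L, 2)
    y = np.asarray(soc)[length - 1:]
    return X, y
```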

3.2. Time-Serialization Convolutional Neural Network

The TSCNN is a feed-forward neural network whose input and output layers can handle various types of data; in a TSCNN, one- or two-dimensional (2D) time-signal data are typically processed.
The TSCNN has multiple hidden layers, including convolutional, pooling, and fully connected layers. The convolutional layer can effectively extract feature information from the input data through convolutional operations, and the pooling layer can further reduce the dimensionality of the data to stabilize the network. Finally, the fully connected layer connects the output of the pooling layer with neurons to obtain output results such as labels.
Figure 7 illustrates the TSCNN structure, which includes multiple convolutional, pooling, and fully connected layers. This model can effectively handle different types of data and is widely used in tasks such as image analysis, speech recognition, and natural language processing.
In this study, the input shape of the TSCNN was x × 2, where x is the number of time steps. In the convolutional layers, the number, size, and feature count of the convolutional kernels were obtained through GA, and L2 regularization was used to constrain the parameters. A rectified linear unit (ReLU) was used as the activation function. The strides of both the convolutional and pooling layers were set to one, as was the padding. A max-pooling operation of size two was used for the pooling layer. The fully connected layers used the tanh activation function with an additional dropout layer that prunes 30% of the neurons to prevent overfitting.
During model training, the Adam optimization algorithm and the MSE loss function were used to update the weights. The Adam algorithm is an adaptive learning rate optimization algorithm that converges faster. The MSE loss function is commonly used in regression problems to measure the gap between actual and predicted values.
In summary, the TSCNN structure used in this study is feasible and considers various factors to optimize model performance. Common techniques such as the ReLU activation function, Adam optimizer, MSE loss function, and dropout layer were applied to avoid overfitting.
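The paper does not name its software framework. As one plausible realization, the sketch below builds a single-convolutional-block TSCNN in TensorFlow/Keras with the components listed above (ReLU convolution with L2 regularization, stride-1 max pooling, tanh fully connected layer, 30% dropout, Adam with MSE loss); the filter, kernel, and neuron counts are placeholders for the GA-chosen values.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_tscnn(time_steps, n_filters=32, kernel_size=3, n_neurons=64):
    """Minimal TSCNN sketch: input shape (time_steps, 2) for [voltage, current]."""
    model = models.Sequential([
        layers.Input(shape=(time_steps, 2)),
        layers.Conv1D(n_filters, kernel_size, strides=1, padding="same",
                      activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling1D(pool_size=2, strides=1, padding="same"),
        layers.Flatten(),
        layers.Dense(n_neurons, activation="tanh"),
        layers.Dropout(0.3),          # prune 30% of neurons during training
        layers.Dense(1),              # SOC estimate
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```

A full implementation would stack as many convolutional and fully connected blocks as the GA-chosen layer counts dictate.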

3.3. Genetic Algorithm Time-Serialization Convolutional Neural Network (GATSCNN)

3.3.1. Genetic Algorithm

The GA is a type of swarm intelligence optimization algorithm that draws inspiration from natural selection and genetic mechanisms in biological evolution. By utilizing genetic operations such as selection, crossover, and mutation among individuals in a population, the GA gradually optimizes the fittest solutions to find the global optimum of the problem. Unlike other algorithms, GA does not require complete information about the global search space, only requiring the fitness measure of the models to select the elites. Constantly crossing and mutating these elites helps optimize the genes. GA displays strong parallelism and search optimization ability, making it suitable for solving complex large-scale optimization problems.
The main optimization techniques used in GA are selection, crossover, and mutation. Selection, also known as elimination, chooses individuals with high fitness for next-generation breeding, thereby retaining better solutions. Crossover combines two excellent individuals to produce new offspring. Mutation randomly alters individual genes to generate new variation.
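The paper does not detail its operator implementations; the sketch below shows one conventional choice (tournament selection, single-point crossover, and bit-flip mutation) operating on the binary chromosomes used here. All names and the seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def select(population, fitness, k=2):
    """Tournament selection: sample k individuals, keep the fittest (lowest MSE)."""
    idx = rng.choice(len(population), size=k, replace=False)
    return population[idx[np.argmin(fitness[idx])]]

def crossover(parent_a, parent_b):
    """Single-point crossover of two binary chromosomes."""
    point = rng.integers(1, len(parent_a))
    return np.concatenate([parent_a[:point], parent_b[point:]])

def mutate(chrom, p=0.01):
    """Flip each bit independently with probability p."""
    mask = rng.random(len(chrom)) < p
    return np.where(mask, 1 - chrom, chrom)
```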

3.3.2. Check Function

In GA, the use of binary encoding for the population may result in data anomalies after the crossover and mutation operations. In such cases, it is necessary to convert the anomalous data into the desired normal data.
Error 1: After the crossover or mutation operations in GA, the data become 0, as shown in Figure 8.
Error 2: According to binary encoding rules, if the parameter range to be optimized is within the range of 1–100, seven bits are required to represent the parameter. However, after the crossover or mutation operations, the optimized parameter may exceed the upper limit, as shown in Figure 9.
Obviously, for a neural network model, regardless of whether the optimized parameter represents the number of layers or number of neurons, a parameter of 0 is incorrect. If the optimized parameter exceeds the upper limit or falls below the lower limit, the result is undesirable. For example, if the optimized parameter represents the length of a time series and if the length is too long, the user may need to wait for a long time to obtain the result. Therefore, the parameters to be optimized must be set to within a reasonable range.
To address this problem, we propose a check function. The check function consists of three steps. Step 1: Convert the binary population into a decimal population. Step 2: Determine whether each optimized parameter is 0 or exceeds the limits. Step 3: Use the check function to convert anomalous data and replace original population data using the following equation:
P_k = P_k % (Max_k − Min_k) + Min_k
where P_k represents the data to be inspected, % denotes the modulo operation, and Max_k and Min_k represent the upper and lower limits of P_k, respectively.

3.3.3. GA-Optimized TSCNN

To achieve the optimal TSCNN network model, GA was used for global optimization of the TSCNN to determine the parameters to be optimized, including the number of CNN layers, the number of fully connected layers, the batch size, the time-series length, the number of features, the convolution kernel size, and the number of neurons. Figure 10 shows the GATSCNN flowchart.
Step 1: Initialize the population. First, the network structure is randomly selected and constructed within the boundaries set for the numbers of CNN layers and fully connected layers. The remaining hyperparameters are then randomly assigned within their respective boundaries. Figure 11 shows the structure of an individual within the population: the left side shows the binary encoding of the individual, and the right side shows the corresponding parameters. The header of the individual consists of the numbers of CNN layers and fully connected layers, representing the neural network structure, followed by “batch size,” the number of samples per training batch. “Time serialization” represents the length of the time series, and “feature,” “kernel,” and “neurons” represent the remaining hyperparameters of the neural network model: the feature size (number of nodes) of the convolutional layers, the length of the convolutional kernel, and the number of neurons in the fully connected layers, respectively. Finally, the optimal parameters are obtained via GA optimization, as sketched below.
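To make the encoding concrete, the sketch below decodes such a binary individual into its parameter fields and back; the field names and bit widths are illustrative assumptions, not the paper's actual layout.

```python
import numpy as np

# Assumed gene layout; each width bounds that parameter's range (e.g., 5 bits -> up to 31).
GENE_BITS = {"cnn_layers": 3, "fc_layers": 3, "batch": 7,
             "time_steps": 5, "filters": 6, "kernel": 4, "neurons": 7}

def decode(chrom):
    """Binary chromosome (array of 0/1) -> parameter dict."""
    params, pos = {}, 0
    for name, width in GENE_BITS.items():
        bits = chrom[pos:pos + width]
        params[name] = int("".join(str(int(b)) for b in bits), 2)
        pos += width
    return params

def encode(params):
    """Parameter dict -> binary chromosome (inverse of decode)."""
    bits = []
    for name, width in GENE_BITS.items():
        bits.extend(int(b) for b in format(params[name], f"0{width}b"))
    return np.array(bits)
```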
Step 2: Calculate the fitness function. First, the population is decoded to generate real-valued individuals that can be recognized by the network structure, and then the original data are divided into time-series data based on the individual’s time serialization. Subsequently, a neural network model is constructed based on the network structure and the hyperparameters of the individual, and the network model is fully trained for 200 epochs. Finally, the MSE of the model on the test set is used as a measure of fitness. MSE is calculated according to the following equation:
MSE = (1/m) Σ_{i=1}^{m} (y_i − y_i′)²
where m represents the length of the data, y_i denotes the true value, and y_i′ is the value predicted by the neural network model.
Step 3: Determine whether the expected result or the maximum number of iterations is reached. If the expected result is achieved, proceed to Step 7; otherwise, continue.
Step 4: Perform GA optimization. Based on the fitness obtained in Step 2, perform selection, crossover, and mutation operations on the population. The population does not need to be decoded at this stage and binary encoding can be used.
Step 5: Check the data. During GA optimization, it is possible to exceed the limit or obtain a binary matrix of all zeros, which can cause errors in the model beyond the expected results and reduce effectiveness. Therefore, a check function is designed to determine and transform the optimized parameters. First, the population optimized by GA is decoded into a real-numbered population, thereby verifying the real-numbered population based on the check function. If abnormal individual data are found, Algorithm 1 is used to transform the data. After completing the check, the population is encoded back into a binary population.
Step 6: Return to Step 2.
Algorithm 1 check function
Input: pop_GA = {pop1, pop2, …, popn}
Output: the checked population, pop
1: for n = 1 → N  do
2:   for k = 1 → K  do
3:      if popn[k] < min_k or popn[k] > max_k then
4:        popn[k] ⇐ (popn[k] % (max_k − min_k) + min_k)
5:      end if
6:      if popn[k] = = 0 then
7:        popn[k] ⇐ random(min_k, max_k)
8:      end if
9:   end for
10: end for
11: return pop
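A runnable Python rendering of Algorithm 1 might look as follows; this sketch assumes the decoded parameter dicts produced by the decode helper above, with bounds[name] = (min_k, max_k).

```python
import random

def check_individual(params, bounds):
    """Algorithm 1 for one decoded individual: wrap out-of-range genes back
    into [min_k, max_k] and replace zero genes with a random valid value."""
    for name, (min_k, max_k) in bounds.items():
        if params[name] < min_k or params[name] > max_k:
            params[name] = params[name] % (max_k - min_k) + min_k
        if params[name] == 0:
            params[name] = random.randint(min_k, max_k)
    return params
```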
Step 7: Determine the optimal individual and decode the output.
Step 8: Generate a neural network model based on the decoded individual and estimate the battery SOC using the datasets corresponding to the two operating conditions.
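Putting Steps 1-8 together, a condensed sketch of the whole GATSCNN loop is shown below. It reuses the helpers sketched in earlier sections (make_windows, build_tscnn, select, crossover, mutate, decode, encode, check_individual) and illustrates the procedure under those assumptions; it is not the authors' code.

```python
import numpy as np

def evaluate(params, voltage, current, soc):
    """Step 2: fitness is the held-out MSE of a TSCNN built from `params`."""
    X, y = make_windows(voltage, current, soc, params["time_steps"])
    split = int(0.75 * len(X))                      # 75/25 split, as in Section 4.1.1
    model = build_tscnn(params["time_steps"], params["filters"],
                        params["kernel"], params["neurons"])
    model.fit(X[:split], y[:split], batch_size=max(1, params["batch"]),
              epochs=200, verbose=0)
    return model.evaluate(X[split:], y[split:], verbose=0)

def gatscnn(pop, bounds, voltage, current, soc, generations=20):
    """GA over binary chromosomes; returns the decoded best individual."""
    for _ in range(generations):
        fitness = np.array([evaluate(decode(c), voltage, current, soc) for c in pop])
        elite = pop[int(np.argmin(fitness))]        # Steps 3/7: track the best
        children = [mutate(crossover(select(pop, fitness), select(pop, fitness)))
                    for _ in range(len(pop))]       # Step 4: GA operators
        pop = [encode(check_individual(decode(c), bounds))
               for c in children]                   # Step 5: repair anomalies
    return decode(elite)                            # Step 8 builds the final model
```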

4. Experimental Analysis

4.1. Results of the Unoptimized Neural Network Experiments

4.1.1. Experimental Analysis of Neural Networks Prior to Optimization

To train and test the accuracy of the neural network prediction, sample data were divided into training and validation sets. In total, 75% of the sample points were randomly selected from all the sample data as the training set and 25% as the validation set. This process was applied to both UDDS and DST discharge data.
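This random split can be reproduced with scikit-learn, for example; X and y are the windowed samples and SOC labels, and the seed is arbitrary.

```python
from sklearn.model_selection import train_test_split

# 75% of the samples for training, 25% for validation
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, shuffle=True, random_state=42)
```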
This study compared the performances of the CNN, BP, RNN, and TSCNN neural network models in estimating battery SOC, thereby validating the results for low battery levels. The effectiveness of the network architecture and the impact of the network depth on the accuracy of SOC estimation were evaluated. Network depth is an important factor that affects the performance of neural network models. In the experiments, the influence of network depth on model performance was explored by setting the number of network layers to 2, 4, and 6. The comparison results are shown as error fluctuations in Figure 12. By comparing the performances of the TSCNN, CNN, BP, and RNN models across different network depths, the advantages and disadvantages of the models in handling drastic changes in current data can be better assessed, providing a reference for practical applications.
Based on the comparison of error fluctuations between the two datasets in Figure 12, it can be observed that different layer numbers correspond to different error fluctuations, indicating that the number of network layers had an impact on network performance. Therefore, it is necessary to adjust the number of neural network layers based on specific tasks and data characteristics to determine the optimal balance point and achieve a balance between learning ability, stability, and accuracy of the model.
According to the data in Figure 13 and Table 2, the TSCNN network exhibited the best performance on both DST and UDDS datasets. On the DST dataset, the TSCNN network had a max error of 6.26% and an MSE of 2.63. On the UDDS dataset, the TSCNN network had a max error of 4.39% and an MSE of 3.52.
In Figure 13, it can be observed that different networks had different max errors for different numbers of layer settings. For example, the optimal number of layers for the BP neural network was approximately four, whereas for other networks it was two. This is because different network structures, datasets, and specific tasks affect the choice of network layers; the layers cannot be simply determined by following general rules. Therefore, it can be concluded that, for the current experimental scenario and task, the TSCNN has a stronger feature extraction ability for time-series data and can more accurately estimate SOC. After considering the impact of the network layers, TSCNN achieved outstanding prediction results on different datasets.

4.1.2. Analysis of the Low-Battery-Level Experiments

The performance of the different network architectures varied at low battery levels. Figure 14 compares the max errors of the various networks when the battery SOC was above 20% (including 20%) and below 20%.
In Figure 14, the black area represents the error when the battery SOC was above 20%, and the red area represents the error when the battery SOC was below 20%. The error remained stable when the battery SOC was above 20%, but it increased significantly when the battery SOC was below 20%. Table 3 provides the error and the MSE for different networks, comparing a battery SOC of above 20% with one of below 20%.
In Figure 15 and Table 3, it can be observed that when the battery SOC was equal to or above 20%, both the max error and the MSE were relatively small, which is better than the corresponding values for below 20%. In the DST dataset, the maximum error of the BP neural network for a battery SOC of above 20% was 2% lower than the corresponding values for one of below 20%. The CNN, RNN, and TSCNN were lower by 0.49%, 1.67%, and 1.02%, respectively. In the UDDS dataset, the max errors for a battery SOC of above 20% were lower than those of below 20% by 1%, 0.66%, 2%, and 0.15%, respectively. Therefore, we conclude that the performance of the neural network deteriorates when the battery SOC is below 20%.

4.2. GA Experimental Verification

The experiments used a ThinkSystem SR650 rack server with an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10 GHz (eight physical CPUs) and 127.4 GB of memory. Each GA optimization experiment took approximately one week.

4.2.1. Experimental Analysis of Genetic Optimization

As shown in Figure 16a and Table 4, for the GA-optimized BP neural network, the max error on DST and UDDS decreased by 2% and 1%, respectively, and the MSE decreased by 2.29 and 1.7, respectively. This indicates that the GA-optimized BP neural network significantly improves the accuracy of SOC estimation.
In Figure 16b and Table 4, it can be observed that in the GA-optimized CNN network, the max error was reduced by about 2.3% on both datasets and the MSE decreased by 3.21 and 2.23, respectively. This indicates that using the GA-optimized CNN significantly improves the accuracy of the SOC estimation.
In Figure 16c and Table 4, it can be observed that in the GA-optimized RNN the max error was reduced by 7.47% and 9.34% on DST and UDDS, respectively. The MSE also decreased substantially, by 19.64 and 19.07, respectively. This indicates that GA optimization significantly improves the SOC estimation performance of the RNN.
Finally, in Figure 16d and Table 4, in the GA-optimized TSCNN neural network, the max error was reduced by 1.81% on both the DST and UDDS datasets. The decrease in MSE was comparatively small, at 1.51 and 2.92, respectively. On the prediction set, the max errors achieved after GA optimization were 4.45% and 2.58%, respectively, demonstrating excellent SOC estimation performance. This indicates that GA optimization significantly improves the accuracy of SOC estimation for the TSCNN neural network.
As shown in Figure 17, after GA optimization, all of the networks achieved excellent SOC estimation performance. The network with the most significant improvement was the RNN, followed by the TSCNN. In particular, the GATSCNN network displayed the best performance on both datasets, achieving max errors of 4.45% on DST and 2.58% on UDDS.
The error radar chart shows the error of each data point in all test rounds, providing an intuitive understanding of the SOC estimation performance for each model. In comparing the results with those before GA optimization, it can be observed that GA optimization improved the accuracy of SOC estimation for all networks, and the improvement effect of GATSCNN was the most significant. Therefore, using GA optimization for neural networks is highly effective in SOC estimation tasks.

4.2.2. Experimental Analysis of GA Optimization at Low Battery Levels

In Figure 18, error plots of battery SOC ≥ 20% and < 20% after GA optimization are shown. The red part represents battery SOC < 20% and the black part represents battery SOC ≥ 20%.
As demonstrated in Figure 19 and Table 5, after GA optimization, both the max error and the MSE were further reduced when the battery SOC was above or below 20%. In the DST dataset, the maximum error of the BP neural network for a battery SOC of below 20% was the same as that of above 20%. The max errors for CNN, RNN, and TSCNN were lower by 0.14%, 1.57%, and 0.17%, respectively. In the UDDS dataset, the maximum errors for a battery SOC of below 20% were lower by 1%, 0.02%, 0.41%, and 0.19%, respectively, compared to those of above 20%. Compared with before GA optimization, the deep learning neural networks optimized by GA not only reduced the overall error but also reduced the error gap between those of battery SOCs above and below 20%.
Compared with previous studies, the TSCNN model optimized by GA reduced the error when the battery SOC was less than 20% and narrowed the gap of errors for battery SOCs of greater than 20% and less than 20%. Therefore, using GA-optimized TSCNN not only solves the problem of poor estimation accuracy for a battery SOC of below 20% but also addresses the empirical errors caused by manually set parameters.

5. Conclusions

This study proposes the GATSCNN model to address existing issues in battery SOC estimation: handling volatile data, uncertainty in neural network model parameters, reduced precision caused by drastic fluctuations in the experimental data when the battery SOC is below 20%, and decreased estimation accuracy caused by the handling of anomalous data after GA optimization.
The GATSCNN model first converts the raw experimental data into time-series data and then uses GA optimization to determine the optimal network structure for the TSCNN model. A check function transforms anomalous GA-optimized data so that the optimal network structure can be determined, and the accuracy of battery SOC estimation is then verified.
The experimental results indicate that the GATSCNN model has four advantages over existing BP, CNN, and RNN neural network models:
  • Improved estimation effect when the battery SOC is below 20%.
  • Stronger feature extraction capabilities for time-series data, and thus more accurate SOC estimation by the TSCNN model.
  • Combining GA with TSCNN to improve the accuracy of battery SOC estimation without changing the characteristics of the TSCNN model, thereby performing even better when the battery SOC is below 20%. The GA optimizes the number of convolutional layers, the number of neurons, the convolutional kernel length, and the batch size of the TSCNN model. It also optimizes the time-series length to match the optimal TSCNN model parameters, overcoming the difficulties and errors caused by the artificial setting of the network structure and parameters and thus improving battery SOC estimation.
  • Introducing a check function that can transform unusual GA-optimized data into normal data to increase data correlation and robustness, accelerate the convergence speed of the population, and improve optimization efficiency.
In future work, an adaptive GA will be used to optimize the neural network structure and parameters, automatically adjusting the crossover and mutation probabilities according to the fitness function to find the optimal individual more quickly, thus improving the convergence speed and optimization ability of the GA. Second, the proposed method should be transferred to practical applications; the next step is to investigate how to integrate it into a battery management system. Finally, additional optimization algorithms and deep learning neural networks should be explored for battery SOC estimation.

Author Contributions

Conceptualization, J.L. and X.W.; methodology, J.L. and X.W.; software, J.L. and X.W.; validation, X.W., J.L. and H.L.; formal analysis, J.L.; investigation, H.L.; resources, J.L.; data curation, X.W.; writing—original draft preparation, X.W. and H.L.; writing—review and editing, X.W. and J.L.; visualization, X.W.; supervision, J.L.; project administration, J.L.; funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research Project of Hebei Education Department, grant number ZD2021334; the Key Research and Development Program of Hebei Province, grant number 20310101D; the S&T Program of Hebei, grant number 22375801D; and the National Natural Science Foundation of China, grant number 12072203.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, D.W.; Zhong, C.; Xu, P.J.; Tian, Y. Deep Learning in the State of Charge Estimation for Li-Ion Batteries of Electric Vehicles: A Review. Machines 2022, 10, 912. [Google Scholar] [CrossRef]
  2. Qays, M.O.; Buswig, Y.; Hossain, M.L.; Abu-Siada, A. Recent Progress and Future Trends on the State of Charge Estimation Methods to Improve Battery-storage Efficiency: A Review. CSEE J. Power Energy Syst. 2022, 8, 105–114. [Google Scholar]
  3. Wu, L.X.; Liu, K.; Pang, H.; Jin, J. Online SOC Estimation Based on Simplified Electrochemical Model for Lithium-Ion Batteries Considering Current Bias. Energies 2021, 14, 5265. [Google Scholar] [CrossRef]
  4. Ji, S.Y.; Sun, Y.; Chen, Z.X.; Liao, W. A Multi-Scale Time Method for the State of Charge and Parameter Estimation of Lithium-Ion Batteries Using MIUKF-EKF. Front. Energy Res. 2022, 10, 933240. [Google Scholar] [CrossRef]
  5. Zhou, N.; Liang, H.; Cui, J.; Chen, Z.; Fang, Z. A Fusion-Based Method of State-of-Charge Online Estimation for Lithium-Ion Batteries Under Low Capacity Conditions. Front. Energy Res. 2021, 9, 790295. [Google Scholar] [CrossRef]
  6. Tian, J.P.; Xiong, R.; Shen, W.X.; Lu, J. State-of-charge estimation of LiFePO4 batteries in electric vehicles: A deep-learning enabled approach. Appl. Energy 2021, 291, 116812. [Google Scholar] [CrossRef]
  7. Li, Y.F.; Xu, G.F.; Xu, B.Q.; Zhang, Y. A Novel Fusion Model for Battery Online State of Charge (SOC) Estimation. Int. J. Electrochem. Sci. 2021, 16, 4–15. [Google Scholar] [CrossRef]
  8. Xing, L.K.; Ling, L.Y.; Wu, X.Y. Lithium-ion battery state-of-charge estimation based on a dual extended Kalman filter and BPNN correction. Connect. Sci. 2022, 34, 2332–2363. [Google Scholar] [CrossRef]
  9. Chen, J.X.; Zhang, Y.; Li, W.J.; Cheng, W.; Zhu, Q. State of charge estimation for lithium-ion batteries using gated recurrent unit recurrent neural network and adaptive Kalman filter. J. Energy Storage 2022, 55, 105396. [Google Scholar] [CrossRef]
  10. Fan, T.E.; Liu, S.M.; Tang, X.; Qu, B. Simultaneously estimating two battery states by combining a long short-term memory network with an adaptive unscented Kalman filter. J. Energy Storage 2022, 50, 104553. [Google Scholar] [CrossRef]
  11. Shu, X.; Li, G.; Zhang, Y.J.; Shen, S.; Chen, Z.; Liu, Y. Stage of Charge Estimation of Lithium-Ion Battery Packs Based on Improved Cubature Kalman Filter with Long Short-Term Memory Model. IEEE Trans. Transp. Electrif. 2021, 7, 1271–1284. [Google Scholar] [CrossRef]
  12. Zhang, X.; Hou, J.W.; Wang, Z.K.; Jiang, Y. Study of SOC Estimation by the Ampere-Hour Integral Method with Capacity Correction Based on LSTM. Batteries 2022, 8, 170. [Google Scholar] [CrossRef]
  13. Zhang, X.; Hou, J.W.; Wang, Z.K.; Jiang, Y. Joint SOH-SOC Estimation Model for Lithium-Ion Batteries Based on GWO-BP Neural Network. Energies 2023, 16, 132. [Google Scholar] [CrossRef]
  14. Bhattacharjee, A.; Verma, A.; Mishra, S.; Saha, T.K. Estimating State of Charge for xEV Batteries Using 1D Convolutional Neural Networks and Transfer Learning. IEEE Trans. Veh. Technol. 2021, 70, 3123–3135. [Google Scholar] [CrossRef]
  15. Zou, R.M.; Duan, Y.X.; Wang, Y.; Pang, J.; Liu, F.; Sheikh, S.R. A novel convolutional informer network for deterministic and probabilistic state-of-charge estimation of lithium-ion batteries. J. Energy Storage 2023, 57, 106298. [Google Scholar] [CrossRef]
  16. Hannan, M.A.; How, D.N.T.; Lipu, M.S.H.; Ker, P.J.; Dong, Z.Y.; Mansur, M.; Blaabjerg, F. SOC Estimation of Li-ion Batteries with Learning Rate-Optimized Deep Fully Convolutional Network. IEEE Trans. Power Electron. 2021, 36, 7349–7353. [Google Scholar] [CrossRef]
  17. Herle, A.; Channegowda, J.; Prabhu, D. A Temporal Convolution Network Approach to State-of-Charge Estimation in Li-ion Batteries. In Proceedings of the 2020 IEEE 17th India Council International Conference (INDICON), New Delhi, India, 10–13 December 2020. [Google Scholar]
  18. Liu, Y.F.; Li, J.Q.; Zhang, G.; Hua, B.; Xiong, N. State of Charge Estimation of Lithium-Ion Batteries Based on Temporal Convolutional Network and Transfer Learning. IEEE Access 2021, 9, 34177–34187. [Google Scholar] [CrossRef]
  19. Li, J.H.; Jing, Z.Y.; Jiang, Y.Z.; Song, W.; Gu, J. The State of Charge Estimation of Lithium-Ion Battery Based on Battery Capacity. J. Electrochem. Soc. 2022, 169, 120539. [Google Scholar] [CrossRef]
  20. Cui, Z.H.; Kang, L.; Li, L.W.; Wang, L.; Wang, K. A hybrid neural network model with improved input for state of charge estimation of lithium-ion battery at low temperatures. Renew. Energy 2022, 198, 1328–1340. [Google Scholar] [CrossRef]
  21. Fan, X.Y.; Zhang, W.G.; Zhang, C.P.; Chen, A.; An, F. SOC estimation of Li-ion battery using convolutional neural network with U-Net architecture. Energy 2022, 256, 124612. [Google Scholar] [CrossRef]
  22. Tao, S.Y.; Jiang, B.; Wei, X.Z.; Dai, H. A Systematic and Comparative Study of Distinct Recurrent Neural Networks for Lithium-Ion Battery State-of-Charge Estimation in Electric Vehicles. Energies 2023, 16, 2008. [Google Scholar] [CrossRef]
  23. Wang, Q.; Ye, M.; Wei, M.; Lian, G.; Wu, C. Co-estimation of state of charge and capacity for lithium-ion battery based on recurrent neural network and support vector machine. Energy Rep. 2021, 7, 7323–7332. [Google Scholar] [CrossRef]
  24. Gong, L.; Zhang, Z.Y.; Li, Y.; Li, X.; Sun, K.; Tan, P. Voltage-stress-based state of charge estimation of pouch lithium-ion batteries using a long short-term memory network. J. Energy Storage 2022, 55, 105720. [Google Scholar] [CrossRef]
  25. Qian, C.; Xu, B.H.; Xia, Q.; Ren, Y.; Yang, D.; Wang, Z. A Dual-Input Neural Network for Online State-of-Charge Estimation of the Lithium-Ion Battery throughout Its Lifetime. Materials 2022, 15, 5933. [Google Scholar] [CrossRef] [PubMed]
  26. Li, Y.; Wang, S.L.; Chen, L.; Yu, P.; Chen, X. Research on state-of-charge Estimation of Lithium-ion Batteries Based on Improved Sparrow Search Algorithm-BP Neural Network. Int. J. Electrochem. Sci. 2022, 17, 220845. [Google Scholar] [CrossRef]
  27. Zhang, G.Y.; Xia, B.Z.; Wang, J.M. Intelligent state of charge estimation of lithium-ion batteries based on L-M optimized back-propagation neural network. J. Energy Storage 2021, 44, 103442. [Google Scholar] [CrossRef]
  28. Fang, C.; Jin, Z.Y.; Wu, J.; Liu, C. Estimation of Lithium-Ion Battery SOC Model Based on AGA-FOUKF Algorithm. Front. Energy Res. 2021, 9, 769818. [Google Scholar] [CrossRef]
  29. Ma, Q.Y.; Zou, C.Y.; Wang, S.L.; Qiu, J. The state of charge estimation of lithium-ions battery using combined multi-population genetic algorithm-BP and Kalman filter methods. Int. J. Electrochem. Sci. 2022, 17, 220214. [Google Scholar] [CrossRef]
  30. Chen, X.P.; Wang, S.L.; Xie, Y.X.; Fernandez, C.; Fan, Y. A novel Fireworks Factor and Improved Elite Strategy based on Back Propagation Neural Networks for state-of-charge estimation of lithium-ion batteries. Int. J. Electrochem. Sci. 2021, 16, 210948. [Google Scholar] [CrossRef]
  31. Liu, H.; Cao, X.Y.; Zhou, F.; Li, G. Online fusion estimation method for state of charge and state of health in lithium battery storage systems. AIP Adv. 2023, 13, 045217. [Google Scholar] [CrossRef]
  32. Chen, J.L.; Lu, C.L.; Chen, C.; Cheng, H.; Xuan, D. An Improved Gated Recurrent Unit Neural Network for State-of-Charge Estimation of Lithium-Ion Battery. Appl. Sci. 2022, 12, 2305. [Google Scholar] [CrossRef]
  33. Zhang, S.M.; Yang, L.; Zhao, X.W.; Qiang, J. A GA optimization for lithium-ion battery equalization based on SOC estimation by NN and FLC. Int. J. Electr. Power Energy Syst. 2015, 73, 318–328. [Google Scholar] [CrossRef]
Figure 1. Battery test system.
Figure 2. Voltage and SOC trend plots for the UDDS dataset.
Figure 3. Current and SOC trend plots for the UDDS dataset.
Figure 4. Voltage and SOC trend plots for the DST dataset.
Figure 5. Current and SOC trend plots for the DST dataset.
Figure 6. Graphical representation of the time-series data for model training.
Figure 7. TSCNN model structure diagram.
Figure 8. The 0 exception.
Figure 9. The excess limit exception.
Figure 10. GATSCNN model flow chart.
Figure 11. Individual structure.
Figure 12. Error fluctuation charts for (a) DST and (b) UDDS datasets.
Figure 13. Max error for different numbers of layers of each neural network on the (a) DST and (b) UDDS datasets.
Figure 14. (a) Error of BP on DST; (b) error of CNN on DST; (c) error of RNN on DST; (d) error of TSCNN on DST; (e) error of BP on UDDS; (f) error of CNN on UDDS; (g) error of RNN on UDDS; (h) error of TSCNN on UDDS.
Figure 15. Max error comparison for battery SOC ≥ 20% against SOC < 20% on (a) DST and (b) UDDS datasets.
Figure 16. Comparison of errors before and after GA optimization for (a) BP, (b) CNN, (c) RNN, and (d) TSCNN.
Figure 17. Radar chart of (a) DST and (b) UDDS datasets before and after GA optimization.
Figure 18. (a) Error of GABP on DST; (b) error of GACNN on DST; (c) error of GARNN on DST; (d) error of GATSCNN on DST; (e) error of GABP on UDDS; (f) error of GACNN on UDDS; (g) error of GARNN on UDDS; (h) error of GATSCNN on UDDS.
Figure 19. Max error comparison for battery SOC ≥ 20% against SOC < 20% on (a) DST and (b) UDDS datasets.
Table 1. Parameters of the type 18650 battery.
Version | Nominal Voltage | Rated Capacity | Working Voltage | Max Current
18650 | 3.6 V | 1.5 Ah | 2.5–4.2 V | 15 A (20 °C)
Table 2. Test network effect verification.
Model | DST MSE | DST Max Error | UDDS MSE | UDDS Max Error
BP | 5.99 | 17% | 5.6 | 9%
CNN | 12.71 | 14.1% | 9.25 | 9.6%
RNN | 20.94 | 14.1% | 20.12 | 12.48%
TSCNN | 2.63 | 6.26% | 3.52 | 4.39%
Table 3. Max error and MSE for each network at battery SOC ≥ 20% and < 20% on the two datasets.
Model | DST ≥20% MSE | DST ≥20% Max Error | DST <20% MSE | DST <20% Max Error | UDDS ≥20% MSE | UDDS ≥20% Max Error | UDDS <20% MSE | UDDS <20% Max Error
BP | 3.10 | 15% | 3.11 | 17% | 6.5 | 8% | 7.05 | 9%
CNN | 12.02 | 13.65% | 14.43 | 14.14% | 6.47 | 8.94% | 20.27 | 9.6%
RNN | 19.55 | 12.47% | 24.44 | 14.14% | 10.09 | 10.48% | 59.78 | 12.48%
TSCNN | 3.07 | 5.24% | 3.5 | 6.26% | 3.78 | 4.24% | 4.49 | 4.39%
Table 4. Comparison of network effects after GA optimization.
Model | DST MSE | DST Max Error | UDDS MSE | UDDS Max Error
GABP | 3.7 | 15% | 3.9 | 8%
GACNN | 9.5 | 11.8% | 7.02 | 7.2%
GARNN | 1.3 | 6.6% | 1.05 | 3.1%
GATSCNN | 1.12 | 4.45% | 0.6 | 2.58%
Table 5. Max error and MSE for each GA-optimized network at battery SOC ≥ 20% and < 20% on the two datasets.
Model | DST ≥20% MSE | DST ≥20% Max Error | DST <20% MSE | DST <20% Max Error | UDDS ≥20% MSE | UDDS ≥20% Max Error | UDDS <20% MSE | UDDS <20% Max Error
GABP | 4.58 | 15% | 1.58 | 15% | 4.44 | 8% | 2.23 | 7%
GACNN | 12.0 | 11.78% | 3.34 | 11.64% | 7.58 | 7.23% | 4.83 | 7.21%
GARNN | 1.57 | 6.63% | 0.68 | 5.06% | 1.08 | 3.14% | 0.95 | 2.73%
GATSCNN | 1.29 | 4.55% | 0.67 | 4.38% | 0.68 | 2.58% | 0.31 | 2.39%
