Accurate knowledge of Lithium-ion battery states, such as the State-of-Charge (SoC), is critical to the safe and consistent operation of battery packs in the energy storage systems of electric vehicles. Numerous SoC estimation strategies, including coulomb counting and the Kalman filter, have been established. However, when these methods are applied to high-capacity battery packs, the differences in parameter values between cells make it difficult to sustain prediction accuracy across all cells, and this cell-to-cell variation grows as the cells age in operation. This study proposes an SoC estimation model for a Lithium-ion battery based on an enhanced Deep Neural Network (DNN). With substantial hidden layers, the proposed DNN can accurately predict the SoC of a driving cycle unseen during training, making it well suited to SoC estimation. The DNN is applied to capture the nonlinear relationship between voltage and current at various SoCs and temperatures. Current and voltage data measured at various temperatures throughout discharge/charge cycles are used for training and testing. Once thoroughly trained on the collected data, the model is applied to additional cell cycle tests to predict their SoC. Simulations were conducted on two different Li-ion battery datasets. The experimental results show that the proposed DNN-based SoC estimation approach yields low mean absolute error and root-mean-square error values, below 5%.
As a result of widespread fossil fuel consumption, environmental degradation and energy constraints are becoming progressively more serious problems worldwide. Authorities are devoting extensive attention to the growth and commercialization of renewable energy sources. Electric vehicles (EVs) have gained considerable attention in recent decades from both governments and industry. Many types of EVs have appeared on the market in recent years, including pure EVs, hybrid EVs, fuel-cell EVs, and others [
Coulomb Counting (CC), Kalman Filter (KF), look-up table, Extended Kalman Filter (EKF), State Observer (SO), and Particle Filter (PF) are some of the SoC estimation methods available today. Among the most prevalent is the CC method, which estimates SoC by integrating the current over time. However, measurement flaws make an accurate SoC estimate difficult, especially because the error accumulates over time [
On the other hand, efforts have been made to predict the SoC using data-driven estimation techniques, which have proven successful. In most cases, classic Machine Learning (ML) techniques such as support vector machines, fuzzy logic controllers, Artificial Neural Networks (ANN), and combinations of these methods were used. Traditional ML algorithms generally employ no more than two computational layers in their implementation, which is particularly true of the ANN. Although significant advancements have been made in recent years, the traditional ANN has since been extended into deeper, more advanced forms [
The Deep Neural Network (DNN) is an upgraded form of the ANN: simply an ANN with 'deeper' functional layers. Combined with intelligent training schemes and modifications, DNNs achieve leading performance in fields such as natural language processing, speech recognition, and computer vision. The idea of using a DNN to estimate SoC, however, is quite new. Researchers have successfully applied two different forms of DNN to forecast the SoC of Li-ion battery packs. The authors of [
The remaining sections of the paper are organized as follows. The DNN concept is discussed briefly in Section 2. The proposed DNN structure for SoC estimation is discussed in detail in Section 3. Section 4 describes the experimental setup and the dataset preparation used in this investigation. Section 4 also summarizes the findings. Lastly, Section 5 concludes the paper.
This section presents the architecture of the proposed enhanced Deep Neural Network (DNN). In addition, the hyper-parameters of the DNN model are also discussed.
The FNN and the RNN structures are the two most frequently utilized architectures in ML [
The DNN considered here is a type of FNN architecture composed of at least three computational layers: the input layer, hidden layers, and the output layer. Because deep neural networks have been widely employed in the literature for a wide range of time-series prediction and classification problems, this study evaluates their modeling capability on the problem of estimating the SoC of the lithium-ion battery. In this configuration, the network maps the battery variables, such as instantaneous current and voltage, temperature, and average current and voltage, to the battery SoC. Mathematically, the vectors of outputs and inputs are specified as
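As a minimal sketch of this mapping, the forward pass of a fully connected network taking the five battery variables and producing an SoC value can be written in NumPy. The layer sizes follow the hyperparameters reported later (three hidden layers of 40 neurons), but the weights here are random and illustrative, not the trained parameters from the paper:

```python
import numpy as np

def sigmoid(x):
    """Logistic activation used for the hidden layers."""
    return 1.0 / (1.0 + np.exp(-x))

def dnn_forward(x, weights, biases):
    """Forward pass of a fully connected DNN: sigmoid hidden layers, linear output."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(a @ W + b)
    return a @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
# 5 inputs [V, I, T, V_avg, I_avg] -> 3 hidden layers of 40 neurons -> 1 SoC output
sizes = [5, 40, 40, 40, 1]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = np.array([3.7, -1.2, 25.0, 3.65, -0.9])  # illustrative measurement vector
soc = dnn_forward(x, weights, biases)
```

With trained weights, `soc` would hold the estimated State-of-Charge for the given measurement vector.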
The activation function of the perceptron
In this case, the learnable variables are the bias and weight parameters, which are determined during the training phase. With minor adjustments, the structure can also be designed with different outputs and with varying numbers of neurons in each layer.
Most ML algorithms expose a variety of hyperparameters that influence the algorithm's performance and behavior. When working with a DNN, a set of hyperparameters must be selected in advance of use. The most significant hyperparameters considered are those listed in
| Hyperparameter | Value |
|---|---|
| Initial learning rate | 0.01 |
| Number of hidden layers | 3 |
| Number of hidden neurons | 40 per layer |
| Learning rate drop factor | 0.1 |
| Learning rate drop period | 500 epochs |
| Number of epochs | 2100 |
| Loss function optimizer | Stochastic Gradient Descent with Momentum |
| Number of runs | 5 |
| Nonlinearity | Sigmoid and SELU for the hidden layers |
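The two hidden-layer nonlinearities named above, the sigmoid and the SELU, can be sketched in NumPy as follows. The SELU constants are the standard published values; the paper does not restate them:

```python
import numpy as np

# Standard SELU constants (Klambauer et al.); assumed, not stated in the paper.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def sigmoid(x):
    """Logistic sigmoid: squashes inputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))

def selu(x):
    """Scaled Exponential Linear Unit: linear for x > 0, scaled exponential below."""
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * np.expm1(x))
```

For example, `sigmoid(0.0)` returns 0.5 and `selu(1.0)` returns the SELU scale constant, since positive inputs are only rescaled.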
This section describes the approaches used to train the proposed deep neural network framework, including the training and optimization methodology, the loss functions, and the error metrics. The optimization algorithm plays a significant part in the search for a convergence point during training, and greater DNN effectiveness can be achieved by employing an appropriate optimization technique. Stochastic Gradient Descent (SGD) has typically been employed to train deep neural networks, although the literature discusses faster variants, such as SGD with momentum, root-mean-square propagation, the Adam optimizer, and the Nesterov Adam optimizer, to improve solution accuracy. This paper employs the SGD-with-momentum algorithm to optimize the DNN framework, using a batch size of 256; the batch size is set by the hardware memory available. For supervised learning, the learnable parameters, such as the weight and bias values, are adjusted iteratively to reduce the error between the experimental and estimated values. The estimated value is signified by the vector
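A minimal NumPy sketch of one SGD-with-momentum parameter update is shown below. The update form is the classical heavy-ball rule; the momentum coefficient of 0.9 is an assumption, as the paper does not state it:

```python
import numpy as np

def sgd_momentum_step(params, grads, velocities, lr=0.01, momentum=0.9):
    """One in-place SGD-with-momentum update over a list of parameter arrays.

    v <- momentum * v - lr * grad
    p <- p + v
    """
    for p, g, v in zip(params, grads, velocities):
        v *= momentum
        v -= lr * g
        p += v

# Tiny illustration with a single scalar parameter.
p = np.array([1.0])
g = np.array([2.0])   # gradient of the loss w.r.t. p
v = np.zeros(1)       # velocity starts at zero
sgd_momentum_step([p], [g], [v], lr=0.01, momentum=0.9)
```

Starting from zero velocity, the first step is a plain gradient step of size `lr * grad`; later steps accumulate momentum from previous directions.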
Errors are backpropagated by computing the partial derivative of the loss with respect to each learnable variable, which is then used to update the parameter values. Because the procedure is iterative, it is repeated until the training termination criteria are met. Iterations are referred to as epochs in this paper, and the termination criterion applied depends on the validation patience. The learning rate, with an initial value of 0.01, is multiplied by the learning-rate drop factor of 0.1 every 500 epochs until the completion of the training phase. The validation patience parameter, set to 2100 epochs, specifies how many epochs of training are conducted without a decrease in error before training stops. As with most network architectures, the training process is repeated several times (five times in this paper) with different initial model parameters, to help ensure that a solution near the global optimum is obtained rather than an undesirable local optimum. The whole process of the proposed model for estimating the SoC of a Li-ion battery is illustrated in
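The step-decay schedule described above (initial rate 0.01, drop factor 0.1, drop period 500 epochs) can be expressed compactly as:

```python
def learning_rate(epoch, initial_lr=0.01, drop_factor=0.1, drop_every=500):
    """Learning rate at a given epoch under a step-decay schedule:
    the rate is multiplied by drop_factor once every drop_every epochs."""
    return initial_lr * drop_factor ** (epoch // drop_every)
```

So the rate is 0.01 for epochs 0-499, 0.001 for epochs 500-999, and so on through the 2100-epoch budget.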
In addition to RMSE, a variety of other error metrics are employed to assess the effectiveness of the DNN models, including Mean Percent Absolute Error (MPAE), Mean Absolute Error (MAE), and Mean Squared Error (MSE). In this paper, RMSE and MAE are evaluated. The formulas for these metrics are given below.
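Under their standard definitions (the paper's exact MPAE normalization is not shown, so the relative-error form below is an assumption), the metrics can be computed as:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    """Mean Squared Error."""
    return np.mean((y_true - y_pred) ** 2)

def rmse(y_true, y_pred):
    """Root Mean Squared Error."""
    return np.sqrt(mse(y_true, y_pred))

def mpae(y_true, y_pred, eps=1e-12):
    """Mean Percent Absolute Error, with eps guarding division by zero."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / (y_true + eps)))
```

For example, with `y_true = [1, 2, 3]` and `y_pred = [1, 2, 4]`, MAE is 1/3 and RMSE is sqrt(1/3).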
The study was performed in an 8-cubic-foot thermal chamber with a 75 A, 5 V Digatron Firing Circuits universal battery tester channel. Voltage and current accuracy were within 0.1% of full scale for the 3 Ah LG HG2 cell under test. Experiments were carried out at four different temperatures, with the battery charged after each test at a 1C rate to 4.2 V with a 50 mA cut-off, once the battery temperature reached 22°C or above. In this paper, the dataset at ambient temperatures of 25°C, 10°C, 0°C, and −10°C is considered. The driving-cycle output profiles are run until the cell has been discharged to 95% of its 1C discharge capacity at the corresponding temperature. From [
To find the optimal number of hidden layers and to observe the impact of network depth, training was conducted with 2, 3, 4, 5, 6, and 7 hidden layers, each with 40 neurons per layer. For this investigation, only the HWFET drive cycle at 25°C was considered. The error metrics and
| Number of layers | RMSE (%) | MAE (%) | R² | Rank |
|---|---|---|---|---|
| 2 | 4.215 | 0.3448 | 0.91047 | 6 |
| 4 | 3.856 | 0.2456 | 0.94112 | 2 |
| 5 | 4.001 | 0.2948 | 0.93694 | 4 |
| 6 | 3.987 | 0.2911 | 0.93991 | 3 |
| 7 | 4.168 | 0.3178 | 0.91456 | 5 |
This subsection investigates how well the DNN performs on the test data, which comprise the UDDS, LA92, and US06 driving-cycle datasets. The DNN demonstrated excellent performance on the test set, as shown in
| Temperature | Number of layers | RMSE (%) | MAE (%) | R² | Rank |
|---|---|---|---|---|---|
| 25°C | 2 | 2.0145 | 1.9792 | 0.8098 | 6 |
| 25°C | 4 | 1.4998 | 1.1423 | 0.8678 | 3 |
| 25°C | 5 | 1.5014 | 1.1514 | 0.8789 | 4 |
| 25°C | 6 | 1.4902 | 1.0125 | 0.8882 | 2 |
| 25°C | 7 | 1.5105 | 1.1745 | 0.8469 | 5 |
| 10°C | 2 | 2.0125 | 1.6989 | 0.8145 | 6 |
| 10°C | 4 | 1.9015 | 1.4025 | 0.8695 | 3 |
| 10°C | 5 | 1.9112 | 1.4269 | 0.8619 | 4 |
| 10°C | 6 | 1.9005 | 1.3354 | 0.8814 | 2 |
| 10°C | 7 | 1.9125 | 1.4458 | 0.8569 | 5 |
| 0°C | 2 | 2.0045 | 1.6498 | 0.8401 | 6 |
| 0°C | 4 | 1.8745 | 1.3598 | 0.8685 | 4 |
| 0°C | 5 | 1.8365 | 1.3659 | 0.8711 | 3 |
| 0°C | 6 | 1.7985 | 1.2914 | 0.8993 | 2 |
| 0°C | 7 | 1.8898 | 1.3815 | 0.8815 | 5 |
| −10°C | 2 | 3.1454 | 2.1315 | 0.8145 | 6 |
| −10°C | 4 | 2.9874 | 1.8789 | 0.8478 | 4 |
| −10°C | 5 | 2.9482 | 1.8614 | 0.8425 | 3 |
| −10°C | 6 | 2.6987 | 1.7569 | 0.8514 | 2 |
| −10°C | 7 | 2.9978 | 1.8898 | 0.8368 | 5 |
As discussed earlier, the proposed DNN model is executed five times independently to obtain the best solutions. To visualize the statistical measures, such as Maximum (MAX) error, RMSE, and MAE across all five runs, bar charts for all temperature conditions are presented in
This paper, related to prior efforts in [
The estimated and measured SoC, as well as the SoC estimation error, for the LG HG2 battery at 25°C are displayed in the time-domain graph in
In this research, a new DNN-based SoC estimation methodology for Li-ion batteries is developed, and its effectiveness is demonstrated through simulation results. SoC estimation of the LG HG2 battery under different temperature conditions has been achieved with less than 5% error. The experiments showed that the DNN model trained on the HWFET drive cycle could predict the SoC of the UDDS, LA92, and US06 drive cycles with a good degree of accuracy. This supports the assumption that a deep neural network with appropriate hidden layers can generalize the SoC prediction from one driving cycle to another. It was also shown that increasing the depth of the DNN reduces the error metrics up to a point: across the 2-to-7-layer study, errors generally decreased with depth up to 6 layers before rising again at 7. Therefore, the proposed DNN model appears quite promising, and its use in EV SoC estimation scenarios warrants thorough examination when applying machine learning techniques. This study contributes to the development of Li-ion battery SoC estimation through the DNN algorithm, achieving improved SoC estimation performance and reduced error rates across various EV drive-cycle trials.
The investigators anticipate that contextual information in the input data will play a significant role in improving SoC estimation accuracy in future studies. Advanced deep learning models such as deep convolutional NNs and long short-term memory networks could be trained to exploit time-related data and improve accuracy. Also essential are the robustness of SoC prediction models to inaccuracies in current, voltage, and temperature sensor data, and the development of new methods for accurate SoC estimation as the battery ages. The approach can also be applied to other real-time applications, such as time-series prediction and load forecasting.
The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University (KKU) for funding this research project Number (R.G.P.2/133/43).