Model of a Composite Energy Storage System for Urban Rail Trains

Urban rail transit can relieve the transportation pressure created by China's large urban population. A composite onboard energy storage system can meet a vehicle's traction requirements and recover energy during the braking stage to improve energy utilisation. However, such a system raises several concerns, including its power and energy demand, battery aging, and maintenance costs. Therefore, the NSGA-II algorithm is proposed to optimise the parameter matching of the composite energy storage system for urban rail trains. The NSGA-II algorithm, with an improved elite retention strategy, is used to optimise the parameter matching of the composite power supply. The optimisation objectives are the replacement costs and the economy of the composite power supply. The method increases the vertical diversity of the search and avoids premature convergence. The NSGA-II algorithm calls the simulation model of the composite power supply in real time and simultaneously optimises the composite power supply and its control parameters. The Pareto set of the optimisation objectives, together with the corresponding parameters and control strategies of the composite power supply, is obtained. The NSGA-II algorithm can optimise the composite energy storage system's parameters and improve the performance indexes of both the train and the composite power supply. The algorithm greatly reduces the composite power supply's capacitance and the system's total energy consumption, so the multi-component energy loss caused by the multi-power-source system can be effectively controlled. The total capacitance is reduced by 12.1%, the battery life is prolonged by 18.86%, and the optimised composite power supply's energy storage is increased by 17.6%.


Introduction
The goal of optimising the urban rail composite energy storage system is to reduce energy consumption and emissions while meeting the dynamic performance requirements and the performance constraints of its components. These characteristics are related to the parameters of the components of the power system, and they are also affected by the control strategy's parameters. There is a nonlinear and strongly coupled relationship among the main components of the system. Therefore, conflicts remain between the objective functions [1][2][3]. This paper addresses the multi-objective conflict problem and how to achieve the best overall performance.
The main methods for solving complex constrained nonlinear multi-objective optimisation problems can be divided into two categories: gradient-based and gradient-free. Gradient-based optimisation methods, such as sequential quadratic programming (SQP), assume that the objective function is continuous, differentiable, and satisfies the Lipschitz condition [2][3][4][5][6]. The composite energy storage system is a complex nonlinear system, so these assumptions rarely hold. The genetic algorithm, being gradient-free, has been widely used in multi-objective optimisation and has also been applied to the parameter optimisation of hybrid electric vehicle components. However, it is rarely used for urban rail trains, which makes it worth studying [7][8][9][10].
However, current research has mainly solved this multi-objective problem by assigning weights to the different objective functions. This method can simplify the problem to some extent, but it is challenging to determine weights that match the actual situation. The multi-objective genetic algorithm applied here instead uses non-dominated sorting to handle multiple objective functions, which can reduce fuel consumption. With both consumption and emissions as the optimisation objectives, the Pareto optimal solution set of this integrated optimisation problem can be obtained by optimising the control strategy parameters. This solution set provides various options for setting the control strategy parameters [11][12][13][14][15].

Multi-Objective Optimisation
There are often design and decision-making problems in engineering under multiple criteria or multiple design objectives [16]. If these objectives conflict, it is necessary to find the design scheme that best balances them. The general expression of the multi-objective optimisation problem is

min f(x) = (f_1(x), f_2(x), …, f_n(x)), subject to g_i(x) ≥ 0, h_j(x) = 0,

where f(x) is the vector of objective functions and g_i(x) ≥ 0, h_j(x) = 0 are the constraints. A common way to solve the multi-objective optimisation problem is to transform the multiple objectives into a single objective for optimisation; common methods are the objective weighting method, the hierarchical optimisation method, the ε-constraint method, the global quasi-measurement method and the goal programming method. There are often conflicts among the sub-objectives: improving one sub-objective's performance index degrades another sub-objective's performance. Therefore, there is usually no solution that is optimal for all the objective functions simultaneously. The French economist Pareto [17] first studied the multi-objective optimisation problem in economics and developed the theory of Pareto optimality. The multi-objective optimisation problem requires optimising a set of functions whose solution is a set of points [18][19][20], called the Pareto optimal set. The Pareto optimal solution is defined as follows: for a minimisation problem with n objective components f_k (k = 1, …, n) forming the vector f(x) = (f_1(x), f_2(x), …, f_n(x)), where x_u ∈ U is a decision variable, x_u is a Pareto optimal solution if there is no x_v ∈ U such that f_k(x_v) ≤ f_k(x_u) for all k = 1, …, n with f_k(x_v) < f_k(x_u) for at least one k. In general, a multi-objective optimisation problem has multiple Pareto optimal solutions, and the set of these solutions is called the Pareto optimal solution set. The key to the multi-objective optimisation problem is to solve for this set. Fig. 1 shows a two-objective unconstrained minimisation problem, where the shaded part is the solution space. Notably, "R" is not Pareto optimal, since point "C" is smaller than "R" in both objectives. Not every solution in the shaded region is optimal, but all the solutions on the boundary curve "ABC" are Pareto optimal.
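The Pareto dominance relation above can be made concrete in a short sketch (Python used purely for illustration; the function names `dominates` and `pareto_front` are my own, not from the paper):

```python
# Minimal sketch of Pareto dominance for a minimisation problem.

def dominates(u, v):
    """True if objective vector u is no worse than v in every objective
    and strictly better in at least one (minimisation convention)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

points = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(points))  # [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3), and (5, 5) by (1, 5), so only the boundary trade-off points survive, mirroring the curve "ABC" in Fig. 1.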

Fast Non-Dominated Sorting Method
The non-dominated sorting method is used in the NSGA algorithm, and its computational complexity is O(mN³). By contrast, the computational complexity of the fast non-dominated sorting method in the NSGA-II algorithm is only O(mN²). The following briefly explains the origin of the computational complexity of each:

Computational Complexity of Non-Dominated Sorting Algorithm
To rank a population of size N with m objectives, each individual must be compared with every other individual in the population to determine whether it is dominated. In this way, each individual requires mN comparisons in total, for O(mN) computational complexity. Once this step is complete, finding all the individuals on the first non-dominated layer requires traversing the entire population, for a total computational complexity of O(mN²). At this point, all the individuals in the first non-dominated layer have been found, and the previous operation is repeated to find the individuals in the subsequent layers. In the worst case (one individual per layer), grading all individuals of the whole population gives a computational complexity of O(mN³).
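The layer-peeling procedure just described can be sketched as follows (illustrative names, not from the paper; the nested scans over the remaining population are what produce the O(mN³) worst case of one individual per layer):

```python
# Naive non-dominated sorting: repeatedly extract the current
# non-dominated layer from the individuals that remain unranked.

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def naive_non_dominated_sort(pop):
    fronts, remaining = [], list(pop)
    while remaining:
        # One full pass over 'remaining' per candidate: O(mN^2) per layer.
        layer = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(layer)
        remaining = [p for p in remaining if p not in layer]
    return fronts

pop = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(naive_non_dominated_sort(pop))
# [[(1, 5), (2, 3), (4, 1)], [(3, 4)], [(5, 5)]]
```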

Computational Complexity of Fast Non-Dominated Sorting Algorithm
For each individual i in the population, two parameters are maintained: n_i, the number of individuals that dominate individual i, and S_i, the set of individuals dominated by individual i.
(1) Find all individuals with n_i = 0 in the population and store them in the current non-dominated set Z_1. (2) For each individual j in the current non-dominated set Z_1, traverse the set S_j of individuals it dominates and subtract 1 from the domination count n_t of each individual t in S_j, since individual j has now been assigned to a front; any individual whose count reaches zero belongs to the next non-dominated set Z_2. (3) The set Z_1 of first-order non-dominated individuals is Pareto optimal: each of its members dominates other individuals but is dominated by none. All individuals in the set are assigned the same non-domination rank rank_i. The same classification operation is then performed on the next set, assigning the corresponding non-domination order, until all the individuals have been classified. In other words, every individual is assigned its corresponding non-domination rank.
Building the n_i and S_i bookkeeping requires O(mN²) comparisons, and each iteration of operations (1) and (2) of the above fast non-dominated sorting algorithm requires at most N calculations, so the whole iterative front-assignment process requires at most O(N²) operations. Therefore, the computational complexity of the whole fast non-dominated sorting algorithm is max(O(mN²), O(N²)) = O(mN²).
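The n_i / S_i bookkeeping and the front-assignment iteration can be sketched as below (a minimal illustration with my own function names; indices into the population stand in for individuals):

```python
# Fast non-dominated sort: one O(mN^2) pass builds n_i and S_i,
# then fronts are assigned by decrementing domination counts.

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def fast_non_dominated_sort(pop):
    n = [0] * len(pop)          # n_i: number of solutions dominating i
    S = [[] for _ in pop]       # S_i: indices of solutions dominated by i
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:           # step (1): members of Z_1
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:    # step (2): release the individuals i dominates
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]          # drop the trailing empty front

pop = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(fast_non_dominated_sort(pop))  # [[0, 1, 3], [2], [4]]
```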

Congestion Determination
In the original NSGA algorithm, the shared-niche technique is used to maintain the population's diversity [21,22]. However, the decision-maker must specify the value of the sharing parameter σ_share. The crowding degree concept is introduced in NSGA-II to overcome this deficiency of the NSGA algorithm. The crowding degree, denoted i_d, refers to the density of individuals around a given point in the population; intuitively, it is expressed by the side lengths of the largest rectangle that contains individual i but no other individual, as shown in Fig. 2.
In the non-dominated sorting genetic algorithm with an elite strategy (NSGA-II), the crowding degree calculation is an important factor in ensuring the population's diversity. The calculation steps are as follows: (1) the crowding degree i_d of each point is initialised to 0; (2) for each optimisation objective, the non-dominated population is sorted, and the crowding degrees of the two boundary individuals are set to infinity, 0_d = l_d = ∞; (3) the crowding degree of every other individual in the population is calculated as

i_d = Σ_j (f_j(i+1) − f_j(i−1)),

where i_d is the crowding degree of point i, f_j(i+1) is the value of the j-th objective function at point i+1, and f_j(i−1) is the value of the j-th objective function at point i−1.
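The steps above can be sketched as follows. This is a minimal illustration with my own function name; the per-objective normalisation by the objective's range (a common convention not spelled out in the text above) is my own assumption:

```python
# Crowding degree i_d for one non-dominated front of objective vectors.

def crowding_distance(front):
    N, m = len(front), len(front[0])
    dist = [0.0] * N                          # step (1): initialise i_d to 0
    for j in range(m):
        order = sorted(range(N), key=lambda i: front[i][j])
        dist[order[0]] = dist[order[-1]] = float("inf")  # step (2): boundaries
        span = (front[order[-1]][j] - front[order[0]][j]) or 1.0  # assumed guard
        for k in range(1, N - 1):             # step (3): sum neighbour gaps
            dist[order[k]] += (front[order[k + 1]][j]
                               - front[order[k - 1]][j]) / span
    return dist

print(crowding_distance([(1, 5), (2, 3), (4, 1)]))  # [inf, 2.0, inf]
```

The two extreme points of the front always survive selection, which keeps the Pareto front's full spread in the population.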

Elite Strategy
The NSGA-II algorithm introduces the elite strategy to prevent losing excellent individuals in the process of population evolution. By mixing the parent population and the offspring population before non-dominated sorting, the loss of excellent individuals from the parent population is better avoided. The execution steps of the elite strategy are shown in Fig. 3. First, the offspring population Q_t of generation t and the parent population P_t are combined to form a new population R_t with population size 2N. Then, population R_t is sorted into non-dominated order, a series of non-dominated sets Z_i are obtained, and the crowding degree of each individual is calculated. Because population R_t contains both parents and offspring, the individuals in the non-dominated set Z_1 obtained after non-dominated sorting are the best individuals in the whole population, so Z_1 is put into the new parent population P_{t+1} first. If the size of P_{t+1} is still smaller than N, the next non-dominated set Z_2 is added, and so on, until adding a set Z_n would make the population size exceed N. Then the crowding-degree comparison operator is applied to the individuals in Z_n, and the first N − num(P_{t+1}) individuals are taken so that the population reaches size N. Finally, the new offspring population Q_{t+1} is generated through genetic operators such as selection, crossover and mutation.
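The elite (environmental) selection step can be sketched end to end as below. All function names are illustrative; the normalisation guard in the crowding calculation is my own assumption rather than something stated in the paper:

```python
# Elite strategy: merge P_t and Q_t, fill P_{t+1} front by front,
# and truncate the last front by descending crowding degree.

def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def fast_non_dominated_sort(pop):
    n = [0] * len(pop)
    S = [[] for _ in pop]
    fronts = [[]]
    for i, p in enumerate(pop):
        for j, q in enumerate(pop):
            if dominates(p, q):
                S[i].append(j)
            elif dominates(q, p):
                n[i] += 1
        if n[i] == 0:
            fronts[0].append(i)
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in S[i]:
                n[j] -= 1
                if n[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

def crowding_distance(front):
    N, m = len(front), len(front[0])
    dist = [0.0] * N
    for j in range(m):
        order = sorted(range(N), key=lambda i: front[i][j])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = (front[order[-1]][j] - front[order[0]][j]) or 1.0  # assumed guard
        for k in range(1, N - 1):
            dist[order[k]] += (front[order[k + 1]][j]
                               - front[order[k - 1]][j]) / span
    return dist

def next_parents(parents, offspring, N):
    """Build P_{t+1} of size N from R_t = P_t + Q_t."""
    R = parents + offspring
    new_pop = []
    for front_idx in fast_non_dominated_sort(R):
        front = [R[i] for i in front_idx]
        if len(new_pop) + len(front) <= N:   # whole front fits
            new_pop.extend(front)
        else:                                # truncate by crowding degree
            d = crowding_distance(front)
            keep = sorted(range(len(front)), key=lambda i: d[i], reverse=True)
            new_pop.extend(front[i] for i in keep[: N - len(new_pop)])
            break
    return new_pop

print(next_parents([(1, 5), (3, 4)], [(2, 3), (5, 5)], 3))
# [(1, 5), (2, 3), (3, 4)]
```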
In the NSGA-II algorithm, the crowded comparison operator is introduced to ensure the diversity of the non-inferior solutions. The crowding degrees of all individuals in the population are compared, so this process has no dependence on the sharing parameter σ_share of the NSGA algorithm.
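The operator itself is a simple two-level comparison (a sketch with an illustrative name): the lower non-domination rank wins, and on equal rank the less crowded individual (larger crowding degree) wins.

```python
# Crowded comparison operator: rank first, then crowding degree.

def crowded_less_than(rank_u, dist_u, rank_v, dist_v):
    """True if individual u is preferred over individual v."""
    return rank_u < rank_v or (rank_u == rank_v and dist_u > dist_v)

print(crowded_less_than(0, 1.5, 1, 9.0))  # True: lower rank decides first
print(crowded_less_than(1, 2.0, 1, 0.5))  # True: same rank, u is less crowded
```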

Simulation Results
In this paper, the experimental environment parameters are as follows: the population size is 100, the default number of generations is 100, the mating pool size is 50, the crossover probability is 0.9, the mutation probability is 0.1, the probability level is 0, the weight coefficient is 1, the regularisation factors are 1.05 and 1, the uncertainty level of the determined parameters is, and the penalty factor is 1000. The calculation results in this paper are shown in Figs. 4-6.
(1) The influence of the constraint probability level λ. Generally, in interval number optimisation, the interval uncertainty constraints can be satisfied at a certain possibility level, and the uncertainty constraints can then be transformed into deterministic constraints. Therefore, the selection of the probability level directly affects the distribution of the non-inferior solutions and plays a key role in the final optimisation results. This section discusses the Pareto frontier under different probability values and further explains the probability value's influence on the final optimisation results by analysing the MATLAB simulation results. Fig. 5 shows the Pareto fronts for the weight coefficient β = 1 at the probability levels λ = 0, λ = 0.5, λ = 0.7 and λ = 1.0.
(2) The selection of the multi-objective weight coefficient β. This section discusses the optimisation results of the same objective function under different values of the multi-objective weight coefficient β, in order to determine its influence on the final optimisation results. Fig. 6 lists the final optimisation results for β = 0, β = 0.5, β = 0.7 and β = 1.0 at the possibility level λ = 0.

Conclusions
The NSGA-II algorithm is used to solve the multi-objective joint optimisation problem of the composite energy storage system. The algorithm aims to improve the vehicle's economy and reduce the replacement costs of the composite energy storage system. The algorithm can call the working-condition model of the composite energy storage system in real time and read the simulation results for further processing. Based on the Pareto set of the two objective values obtained by the algorithm, the initial cost, average daily cost and battery life mileage of each solution in the Pareto set are further compared, and the final solution is selected. The final matching parameters significantly reduce the composite energy storage system's replacement costs without reducing the whole vehicle's economy. At the same time, the initial cost and other indicators also achieve satisfactory results.