IRKO: An Improved Runge-Kutta Optimization Algorithm for Global Optimization Problems

Optimization is a key technique for maximizing or minimizing functions and achieving optimal cost, gain, energy, mass, and so on. Metaheuristic algorithms are essential for solving optimization problems, and most of these techniques are inspired by collective intelligence and natural foraging behavior. There is no such thing as a universally best or worst algorithm; instead, some algorithms are more effective for certain problems. Therefore, in this paper, a new improved variant of the recently proposed metaphor-less Runge-Kutta Optimization (RKO) algorithm, called the Improved Runge-Kutta Optimization (IRKO) algorithm, is suggested for solving optimization problems. The IRKO is formulated using the basic RKO and a local escaping operator to enhance the diversification and intensification capabilities of the basic RKO version. The performance of the proposed IRKO algorithm is validated on 23 standard benchmark functions and three constrained engineering optimization problems. The outcomes of IRKO are compared with those of seven state-of-the-art algorithms, including the basic RKO algorithm. The proposed IRKO algorithm is superior to the other algorithms in discovering the optimal results for all selected optimization problems. The runtime of IRKO is less than 0.5 s for most of the 23 benchmark problems, and IRKO ranks first for most of the selected problems, including the real-world optimization problems.


Introduction
The term "optimization" relates to determining the best values of different system components to complete the system's design at the lowest possible cost. In machine learning and artificial intelligence, practical applications and problems are typically constrained, unconstrained, or discrete [1,2]. As a result, finding the best solutions using standard numerical programming approaches is difficult. Numerous optimization methods have been established in recent years to enhance the performance of various systems and reduce computing costs. Conventional optimization techniques have several flaws and restrictions, such as convergence to a local minimum and an undefined search space. Many novel optimization approaches have been presented in recent years to address such flaws. Most systems are nonlinear and require advanced optimization techniques to be evaluated. Mathematical optimization techniques are often inspired by findings in nature. Such algorithms produce random solutions inside the search space and keep making improvements until these results converge to a probable global optimum. Every metaheuristic algorithm has its own settings that influence the optimization process [3][4][5][6]. In general, metaheuristic algorithms share common characteristics, such as the search strategy, which is divided into two sections: one known as diversification and the other known as intensification. In the first phase, the algorithm generates random solutions to explore potential locations of the search space. In the second phase, the optimization approach attempts to discover the best result within the search space. An excellent optimization algorithm must balance the exploitation and exploration phases to prevent entrapment in local minima.
The greatest difficulty with such algorithms lies in striking an exact balance between the exploitation and exploration phases, so as to avoid the local minima trap while still improving the accuracy of previously obtained solutions [7][8][9]. As discussed in [10], there are no good or bad optimization techniques, only techniques more suitable for a certain problem; this leaves room for developing new optimization techniques.
The authors aim to provide a more efficient and successful method; hence, this paper proposes the Improved Runge-Kutta Optimization (IRKO) algorithm, a metaphor-free optimization method. The suggested IRKO is formulated from the basic version of the RKO algorithm [11], which employs a particular slope-computation ideology based on the Runge-Kutta technique as a search engine for optimization problems. To improve the solution quality of the IRKO, the Local Escaping Operator (LEO) [12] is integrated with the RKO. To thoroughly demonstrate the robustness and efficacy of the IRKO, a set of twenty-three classical benchmarks and three constrained engineering problems is employed.
The paper is organized as follows: Section 2 reviews the background and related works on metaheuristic algorithms, and Section 3 discusses the formulation of the IRKO algorithm employing the RKO and LEO concepts. Section 4 demonstrates the numerical simulation results on classical benchmark functions and real-world problems. Section 5 concludes the paper.
to enhance and address various applications, such as global optimization, feature selection, and image segmentation, to name only a few. The multi-verse optimizer is based on the multi-verse hypothesis in physics. It has handled various problems, including global optimization, forecasting, power systems, and feature selection. Other physics-based algorithms, such as the Sine-Cosine Algorithm (SCA) [37], Ion-Motion Optimizer (IMO) [38], and the equilibrium optimizer [23], are among the best-performing algorithms for solving optimization problems.
In general, population-based optimization algorithms begin by randomly generating candidate solutions. These solutions are improved by the algorithm and iteratively assessed against a specified fitness function, which is the basis of any such algorithm. Due to their stochastic nature, obtaining an optimal or near-optimal solution in a single run is not guaranteed. However, a large number of random solutions and evolutionary rounds increases the probability of discovering the global optimum for a given problem. Regardless of the differences among metaheuristic approaches, the optimization process may be separated into two distinct phases: exploration and exploitation. Exploration refers to the broad scope of searching, employing the several solutions maintained by the algorithm to circumvent local search difficulties.

Improved Runge-Kutta Optimization (IRKO) Algorithm
Firstly, this section discusses the concepts of the RKO algorithm and LEO and then extends the discussion to the development of the IRKO algorithm using RKO and LEO.

Runge-Kutta Optimization (RKO) Algorithm
The RKO algorithm is characterized by the absence of metaphors and strict attention to the underlying mathematical structures [11]. It is inappropriate to describe a population-based technique's framework using metaphors, as doing so conceals the mathematical basis that powers the optimizer. Thus, RKO was established considering the logic of the Runge-Kutta (RK) approach and the evolution of a population of individuals. The RK method employs a special formulation to handle ordinary differential equations, and the fundamental premise of RKO is the estimated gradient suggested by the RK method. The RKO employs this as a method of exploring the search space, forming a set of update rules. The initialization phase is the first step of the algorithm, and this phase of the RKO is similar to that of other population-based algorithms. This section discusses only the main mathematical formulation of the RKO algorithm.

Search Mechanism
The solution space is divided into regions, with random solutions placed in each region. A set of search solutions is then placed into the various regions, and a balance between exploitation and exploration is established. To define the search strategy in the RKO algorithm, the RK4 approach is applied. Thus, the search mechanism (SM) in RKO is represented in Eq. (1),
where k1, k2, k3, and k4 are the coefficients of the first-order derivative estimated by the RK method, and Δx is the position increment.
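Under the classical fourth-order Runge-Kutta weighting, Eq. (1) combines the four slope coefficients into a single search step. The following minimal sketch assumes that standard RK4 weighting; note that in RKO itself, k1-k4 are built from population positions rather than from an ODE right-hand side:

```python
def search_mechanism(k1, k2, k3, k4, delta_x):
    """RK4-style weighted average of the four slope coefficients:
    SM = (1/6) * (k1 + 2*k2 + 2*k3 + k4) * delta_x
    (form assumed from the classical RK4 weighting)."""
    return (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0 * delta_x
```

The middle slopes k2 and k3 carry double weight, exactly as in the numerical RK4 integrator that inspires the search rule.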

Solution Update
The RKO algorithm starts with a random population of solutions. At each iteration, the solutions' positions are updated by the RK method: RKO employs a solution together with the search mechanism generated using the Runge-Kutta approach. To provide both exploration and exploitation search, Eq. (3) is utilized.
where a and b are constant numbers, IT denotes the current iteration, and MaxIT denotes the maximum number of iterations. The expressions for x_c and x_m are presented in Eq. (6).
where ϕ denotes a random number between [0, 1], x_best denotes the best-so-far solution, and x_lbest denotes the best solution of the current iteration.
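The construction of x_c and x_m can be sketched as a pair of convex combinations; the exact blend (current solution with a random solution for x_c, best-so-far with iteration-best for x_m) is an assumption based on the base RKO paper, not spelled out in the text above:

```python
import random

def convex_pair(x_n, x_r1, x_best, x_lbest):
    """Eq. (6)-style convex combinations (form assumed from the base
    RKO paper): x_c blends the current solution x_n with a random
    solution x_r1; x_m blends the best-so-far solution x_best with
    the iteration-best solution x_lbest, using one shared phi."""
    phi = random.random()  # phi in [0, 1]
    x_c = phi * x_n + (1.0 - phi) * x_r1
    x_m = phi * x_best + (1.0 - phi) * x_lbest
    return x_c, x_m
```

Because ϕ is shared, x_c and x_m lean toward the population and the elite solutions by the same random proportion each time the update is evaluated.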

Enhanced Solution Quality (ESQ)
The RKO employs the ESQ to maximize the quality of the solution across iterations while avoiding local optima. The solution (x_new2) is created utilizing the ESQ as follows:

w = rand(0, 2) · exp(−c (IT/MaxIT)) (8)

x_avg = (x_r1 + x_r2 + x_r3)/3 (9)

(10)

where β signifies a random number between [0, 1], c signifies a random number equal to 5 × rand, and r denotes an integer equal to 1, 0, or −1. The solution (x_new2) may not be better than the current solution (i.e., f(x_new2) > f(x_n)). To produce a better solution in that case, an alternative new solution (x_new3) is produced, which is represented in Eq. (11), where v denotes a random number equal to 2 × rand. Eq. (11) moves x_new2 toward a better location: the search around x_new2 begins under the first criterion, and RKO then seeks potential directions to progress toward the right decision. The pseudocode of RKO is shown in Algorithm 1. For comprehensive details of RKO, readers are encouraged to consult the base paper.
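The two fully specified quantities of the ESQ step, Eqs. (8) and (9), can be sketched directly (Eq. (10) itself is not reproduced here, since its body is not given above):

```python
import math
import random

def esq_quantities(IT, MaxIT, x_r1, x_r2, x_r3):
    """w and x_avg from Eqs. (8)-(9): w decays exponentially over the
    iterations with c = 5*rand, and x_avg is the mean of three randomly
    chosen population members."""
    c = 5.0 * random.random()
    w = random.uniform(0.0, 2.0) * math.exp(-c * IT / MaxIT)
    x_avg = (x_r1 + x_r2 + x_r3) / 3.0
    return w, x_avg
```

Since exp(−c·IT/MaxIT) ≤ 1, the weight w is always in [0, 2] and shrinks as the iteration count grows, gradually damping the ESQ perturbation.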

Local Escaping Operator
The authors of [3,6] presented an operator called the LEO, which is employed to boost the capabilities of a gradient-based optimizer. The LEO's updates ensure that the solutions are of superior quality: it can bypass local minima traps and hence enhances convergence. The LEO produces high-performing viable alternatives by combining several solutions, such as the best position x_best, two randomly selected solutions x_r1^m and x_r2^m, and a new randomly generated solution x_k^m. Therefore, x_LEO^m can be calculated using Eq. (12).
where b is a normally distributed random number, pr represents the probability rate, and a represents a uniform random number in the range [−1, 1]. The values of the random numbers a_1, a_2, and a_3 are obtained using Eq. (13).
Z_1 is equal to 1 or 0. The expression for x_k^m is given in Eq. (14), where x_rand represents a newly generated random solution, u_2 represents a random number between [0, 1], and x_p^m represents a random population solution. Eq. (14) is therefore rewritten in the unified form shown in Eq. (15).
The value of Z_2 is 1 or 0, depending on u_2. Readers are encouraged to consult the base paper [6] for a better understanding.
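The unified selection of Eq. (15) can be sketched as follows; the 0.5 threshold on u_2 is an assumption, since the text only states that Z_2 depends on u_2:

```python
def leo_candidate(x_rand, x_p, u2):
    """Unified Eq. (15)-style selection of the LEO trial solution x_k:
    Z2 = 1 keeps the freshly generated random solution x_rand,
    Z2 = 0 keeps a random population member x_p.  The 0.5 threshold
    on u2 is an assumption (the text only says Z2 depends on u2)."""
    Z2 = 1.0 if u2 < 0.5 else 0.0
    return Z2 * x_rand + (1.0 - Z2) * x_p
```

Writing the branch of Eq. (14) as the single expression Z2·x_rand + (1 − Z2)·x_p is what lets Eq. (15) replace the if/else with one formula.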

Development of Improved Runge-Kutta Optimization Algorithm
The RKO algorithm suffers from becoming trapped in local minima: the optimization cannot be completed when the search is confined to a local region. This scenario often occurs in difficult, high-dimensional optimization problems. Additionally, producing new solutions depends on the results of the previous iteration, which can slow the algorithm's convergence and cause premature convergence of the solutions. To increase the search capability and handle difficult real-world problems, we enhance the RKO algorithm using the LEO concept. The proposed IRKO algorithm follows the RKO procedure step by step. The LEO improves the diversification and intensification phases of the RKO algorithm alongside the RK method. The initialization phase of the IRKO is identical to that of the RKO algorithm.
To examine the suggested solutions and improve them in the next iteration, the solutions must be analyzed in each iteration. Accordingly, each newly generated population is evaluated to obtain its fitness, and the solutions are updated using Eq. (3). The new population is then improved by the LEO mechanism, as given in Eq. (12). Finally, the best position is obtained after updating the population using Eqs. (7) and (11), as in the RKO algorithm. The proposed IRKO is iterated until it meets the terminating condition, after which the best solution found so far is reported. The flowchart of the IRKO is shown in Fig. 1.
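The step-by-step flow above can be summarized in a schematic main loop. This is a sketch, not the paper's implementation: the RK-style move and the LEO-style refinement below are simplified stand-ins for Eqs. (3) and (12), and the sphere function is a placeholder objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Placeholder objective (the classic unimodal sphere function)."""
    return float(np.sum(x ** 2))

def irko(obj, dim=5, pop=20, max_it=200, lb=-10.0, ub=10.0):
    """Schematic IRKO loop: an RK-style move toward the best solution
    (stand-in for Eq. (3)), then an optional LEO-style refinement using
    two random population members (stand-in for Eq. (12)); greedy
    selection keeps the better of the old and new positions."""
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.array([obj(x) for x in X])
    best = X[fit.argmin()].copy()
    for it in range(max_it):
        for i in range(pop):
            # RK-style step toward the best solution, decaying over time
            step = rng.normal(0.0, 1.0, dim) * (best - X[i]) * (1.0 - it / max_it)
            cand = np.clip(X[i] + step, lb, ub)
            # LEO-style refinement: jump using two random members
            if rng.random() < 0.5:
                r1, r2 = rng.integers(0, pop, 2)
                cand = np.clip(cand + rng.random() * (X[r1] - X[r2]), lb, ub)
            f = obj(cand)
            if f < fit[i]:  # greedy selection
                X[i], fit[i] = cand, f
        best = X[fit.argmin()].copy()
    return best, float(fit.min())

best, fbest = irko(sphere)
```

The greedy acceptance makes each individual's fitness monotonically non-increasing, mirroring how the IRKO keeps the best position after every RK and LEO update.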

Benchmark Test Suites
Twenty-three benchmark functions, including unimodal, multi-modal, and fixed-dimension multi-modal functions, were used to examine the IRKO's ability to exploit global solutions, explore the search space, and escape the local minima trap. The benchmark test functions are provided at the weblink [39] for the reader's reference: the exploitation potential is assessed using the unimodal benchmarks (F1-F7); the multi-modal benchmarks (F8-F13), with a dimension of 30, assess the exploration ability; and the fixed-dimension multi-modal benchmarks (F14-F23) demonstrate the ability to explore low-dimensional search spaces. Several well-known optimization algorithms, namely GWO, HHO, SCA, IMO, HGSO, RKO, and WOA, were evaluated on the same functions to confirm the superiority of the suggested IRKO algorithm. The simulations were carried out in MATLAB 2020a on a laptop with an Intel Core i5 processor (2.4 GHz clock speed) and 8 GB of RAM.
To confirm the performance of the proposed IRKO algorithm, the balance-phase curves, average fitness curves, and trajectory curves are utilized and shown in Fig. 2. Fig. 2 presents the qualitative measures: the 2-D view of each test function (first column) and the search history, conveying the distribution of solutions in the search space (second column). The trajectory curves, balance-phase curves, and average fitness curves are displayed in the third, fourth, and fifth columns, respectively. The search history explains how the algorithm searches for the optimal solution in the search space; the LEO concept enhances this behavior of the IRKO. This modality depicts the movement of the population about the best position for the unimodal functions, and the dispersion of the population corresponds to the modality. When using IRKO to address multi-modal and unimodal functions, both the exploration and exploitation capabilities are enhanced. The trajectory curves show high amplitude and frequency in the initial iterations, which disappear in the final iterations; that is, IRKO has a high exploration ability early on and strong exploitation later. This behavior suggests that the IRKO algorithm approaches the optimum. The LEO in the IRKO algorithm drives the search process to focus precisely and broadly on the local region, aiding the RKO in discovering the search space efficiently and accurately compared to other recently proven methods. Fig. 3 illustrates the best solutions achieved so far across the iterations. IRKO's convergence curves show a visible decay rate for the unimodal functions, while the other algorithms exhibit severe stagnation, indicating that the exploitation stage of the IRKO is supported by a reliable exploration capability. The convergence curves of the IRKO on the multi-modal functions indicate a seamless transition between exploitation and exploration.
The IRKO converges quickly, with better results than its counterparts on most of the remaining test functions. This is evident given that the IRKO balances the exploitation and exploration stages well, capturing values near the best solutions; these solutions are then exploited proficiently throughout the iterations to deliver the best results. In addition, the reliability of all selected algorithms is assessed using boxplots. Fig. 4 shows the boxplot analysis of all algorithms on the selected benchmarks, from which it can be seen that the reliability of the proposed IRKO is superior to that of all other selected algorithms. To rank the algorithms, Friedman's rank test (FRT) was carried out, and the FRT values of all algorithms are listed in Tab. 2. Tab. 2 shows that the proposed IRKO algorithm ranks first on most of the benchmark functions.
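The Friedman rank test used for Tab. 2 can be reproduced in a few lines; the fitness values below are illustrative placeholders, not the paper's results:

```python
import numpy as np

# Illustrative best-fitness values (rows: benchmark functions,
# columns: algorithms, e.g. IRKO, RKO, GWO) -- NOT the paper's data.
data = np.array([
    [1e-8, 4e-6, 2e-4],
    [2e-7, 3e-5, 1e-3],
    [3e-9, 2e-6, 5e-4],
    [5e-8, 8e-6, 3e-4],
    [1e-7, 6e-6, 9e-4],
])

n, k = data.shape                                  # n benchmarks, k algorithms
ranks = data.argsort(axis=1).argsort(axis=1) + 1   # rank 1 = best (lowest fitness)
mean_ranks = ranks.mean(axis=0)                    # Friedman mean rank per algorithm

# Friedman chi-square statistic (valid here since this toy data has no ties)
R = ranks.sum(axis=0)
stat = 12.0 / (n * k * (k + 1)) * np.sum(R ** 2) - 3.0 * n * (k + 1)
```

The mean rank per column is what an FRT table reports; a lower mean rank means the algorithm consistently placed better across the benchmark set.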

Real-World Optimization Problems
In this subsection, the IRKO algorithm is tested on three real-world engineering design problems: the welded beam, tension/compression spring, and pressure vessel designs.

Welded Beam Design
The primary objective of the problem depicted in Fig. 5 is to minimize the cost of the welded beam subject to the constraints. The design variables are the thickness of the weld (h), the length (l), the height (t), and the thickness of the bar (b). The constraints include the beam end deflection (δ), bar buckling load (P_c), shear stress (τ), beam bending stress (θ), and side constraints. The variable ranges are 0.1 ≤ x_1 ≤ 2, 0.1 ≤ x_2 ≤ 10, 0.1 ≤ x_3 ≤ 10, and 0.1 ≤ x_4 ≤ 2, and the design constants are P = 6000 lb, L = 14 in., δ_max = 0.25 in., E = 30 × 10^6 psi, G = 12 × 10^6 psi, τ_max = 13,600 psi, and σ_max = 30,000 psi. The objective function is given in Eq. (16).
Subject to the constraints: Tab. 3 lists the results obtained by the IRKO and the other selected algorithms (RKO, GWO, HHO, WOA, IMO, HGSO, and SCA). From Tab. 3, it can be seen that the IRKO performs better than all selected methods and attains the minimum cost. Tab. 3 also lists the statistical data, such as Min, Mean, STD, and RT. Therefore, the reliability of the suggested IRKO algorithm is determined to be better for this engineering design problem. Fig. 6 depicts the convergence curves and box plots of all algorithms on this problem. In addition, the FRT values obtained by all algorithms are listed. The proposed IRKO algorithm also ranks first in solving the welded beam design problem.
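For reference, the widely used standard form of the welded-beam cost objective (the usual form of Eq. (16)) is sketched below, with the conventional variable mapping x = [h, l, t, b]; constraint handling is omitted:

```python
def welded_beam_cost(h, l, t, b):
    """Standard welded-beam fabrication cost (usual form of Eq. (16)):
    weld material cost (first term) plus bar material cost (second term)."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```

In practice, the constraints on δ, P_c, τ, and θ are usually handled with a penalty added to this cost before the metaheuristic compares candidate solutions.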

Tension/Compression Spring Design
The tension/compression spring design is considered yet another classical mechanical engineering problem. The problem is illustrated in Fig. 7, and its primary objective is to minimize the weight of the tension/compression spring.
Tab. 4 lists the results obtained by the IRKO and the other selected algorithms (RKO, GWO, HHO, WOA, IMO, HGSO, and SCA). From Tab. 4, it can be seen that the IRKO performs better than all other selected algorithms and attains the minimum spring weight. Tab. 4 also lists the statistical data, such as Min, Mean, STD, and RT. Therefore, the reliability of the suggested IRKO algorithm is determined to be better for the spring design problem. Fig. 8 depicts the convergence curves and box plots of all algorithms on this problem. In addition, the FRT values obtained by all algorithms are listed. The proposed IRKO algorithm also ranks first in solving the tension/compression spring design problem.
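The standard form of the spring-weight objective can likewise be sketched; the conventional variables are the wire diameter d, mean coil diameter D, and number of active coils N, and the constraints are omitted:

```python
def spring_weight(d, D, N):
    """Standard tension/compression spring weight:
    (active coils + 2) * mean coil diameter * wire diameter squared."""
    return (N + 2.0) * D * d ** 2
```

The "+2" accounts for the two inactive end coils of the spring, which add material but no deflection.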

Pressure Vessel Design
The graphical view of the pressure vessel design is shown in Fig. 9. The pressure vessel has hemispherical heads and capped ends, and the key objective is to minimize the construction cost. The problem has four constraints and four parameters: the thickness of the shell (T_s), the thickness of the head (T_h), the inner radius (R), and the length of the cylindrical section (L). The variable ranges are 0 ≤ x_i ≤ 99 for i = 1, 2 and 10 ≤ x_i ≤ 200 for i = 3, 4. The objective function is given in Eq. (20).
Subject to the constraints: Tab. 5 lists the results obtained by the IRKO and the other selected algorithms (RKO, GWO, HHO, WOA, IMO, HGSO, and SCA). From Tab. 5, it can be seen that the IRKO performs better than all other selected algorithms and attains the minimum construction cost. Tab. 5 also lists the statistical data, such as Min, Mean, STD, and RT. Therefore, it is concluded that the reliability of the proposed IRKO algorithm is better for the pressure vessel design problem. Fig. 10 depicts the convergence curves and box plots of all algorithms on this problem. In addition, the FRT values obtained by all algorithms are listed. The proposed IRKO algorithm ranks first in solving the pressure vessel design problem.
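The standard pressure-vessel cost objective (the usual form of Eq. (20)) is sketched below with the variable ordering x = [T_s, T_h, R, L]; constraints are omitted:

```python
def vessel_cost(Ts, Th, R, L):
    """Standard pressure-vessel cost (usual form of Eq. (20)):
    shell and head material, forming, and welding cost terms."""
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)
```

As with the other two design problems, the four constraints are typically folded into this cost via a penalty before the algorithms in Tab. 5 compare candidates.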

Conclusion
In this paper, an enhanced variant of the RKO algorithm, called the IRKO algorithm, is proposed. A local escaping operator is employed to improve the exploration and exploitation capabilities of the original RKO algorithm. A suite of 23 classical test functions and three engineering design problems (welded beam, tension/compression spring, and pressure vessel design) are used to assess the performance of the IRKO algorithm. According to the statistical findings on the benchmarks, the IRKO produced outcomes that were either superior or very close to those of the other selected algorithms. Furthermore, the investigations of the design problems confirm the proposed IRKO's suitability for real-world applications.
Based on the above analysis, the IRKO opens up a broad range of future work. This includes applying the IRKO algorithm to numerous applications, such as image processing, feature selection, PV parameter estimation, power systems, power electronics, smart grids, big data, data mining, signal denoising, wireless sensor networks, artificial intelligence, machine learning, and other benchmark functions. It is also applicable to binary-, multi-, and many-objective optimization problems.