Computer Modeling in Engineering & Sciences

DOI: 10.32604/cmes.2022.019198

ARTICLE

An Improved Gorilla Troops Optimizer Based on Lens Opposition-Based Learning and Adaptive β-Hill Climbing for Global Optimization

Yaning Xiao, Xue Sun*, Yanling Guo, Sanping Li, Yapeng Zhang and Yangwei Wang

College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin, 150040, China
*Corresponding Author: Xue Sun. Email: xuesun@hit.edu.cn
Received: 08 September 2021; Accepted: 28 October 2021

Abstract: Gorilla troops optimizer (GTO) is a newly developed meta-heuristic algorithm, which is inspired by the collective lifestyle and social intelligence of gorillas. Similar to other metaheuristics, the convergence accuracy and stability of GTO will deteriorate when the optimization problems to be solved become more complex and flexible. To overcome these defects and achieve better performance, this paper proposes an improved gorilla troops optimizer (IGTO). First, Circle chaotic mapping is introduced to initialize the positions of gorillas, which enriches the population diversity and establishes a good foundation for global search. Then, in order to avoid getting trapped in the local optimum, the lens opposition-based learning mechanism is adopted to expand the search ranges. Besides, a novel local search-based algorithm, namely adaptive β-hill climbing, is amalgamated with GTO to increase the final solution precision. Owing to these three improvements, the exploration and exploitation capabilities of the basic GTO are greatly enhanced. The performance of the proposed algorithm is comprehensively evaluated and analyzed on 19 classical benchmark functions. The numerical and statistical results demonstrate that IGTO can provide better solution quality, local optimum avoidance, and robustness compared with the basic GTO and five other well-known algorithms. Moreover, the applicability of IGTO is further proved through resolving four engineering design problems and training multilayer perceptron. The experimental results suggest that IGTO exhibits remarkable competitive performance and promising prospects in real-world tasks.

Keywords: Gorilla troops optimizer; circle chaotic mapping; lens opposition-based learning; adaptive β-hill climbing

1  Introduction

Optimization refers to the process of searching for the optimal solution to a particular issue under certain constraints, so as to maximize benefits, performance and productivity [1–4]. With the help of optimization techniques, a large number of problems encountered in different applied disciplines could be solved in a more efficient, accurate, and real-time way [5, 6]. However, with the increasing complexity of global optimization problems nowadays, conventional mathematical methods based on gradient information are challenged by high dimensionality, abundant suboptimal regions, and large-scale search ranges, and thus cannot adapt to real requirements [7, 8]. The development of more effective tools to settle these complex NP-hard problems has therefore become an important research hotspot. Compared to traditional approaches, meta-heuristic algorithms (MAs) are often able to obtain the global best results on such problems, which is attributed to the merits of their simple structure, ease of implementation, as well as strong capability to bypass the local optimum [9, 10]. As a result, during the past few decades, MAs have entered a blowout stage and received major attention from worldwide scholars [11–13].

MAs find out the optimal solution through the simulation of stochastic phenomena in nature. Based on the different design concepts, the nature-inspired MAs may be generally classified into four categories [14–16]: evolution-based, physical-based, swarm-based, and human-based algorithms. Specifically, evolutionary algorithms emulate the laws of Darwinian natural selection theory, and some well-regarded cases of which are Genetic Algorithm (GA) [17], Differential Evolution (DE) [18], and Biogeography-Based Optimization (BBO) [19]. Physical-based algorithms simulate the physical phenomenon of the universe such as Simulated Annealing (SA) [20], Multi-Verse Optimizer (MVO) [21], Thermal Exchange Optimization (TEO) [22], Atom Search Optimization (ASO) [23], and Equilibrium Optimizer (EO) [24], etc. Swarm-based algorithms primarily originate from the collective behaviours of social creatures. A remarkable embodiment of this category of algorithms is Particle Swarm Optimization (PSO) [25], which was first proposed in 1995 based on the foraging behaviour of birds. Ant Colony Optimization (ACO) [26], Chicken Swarm Optimization (CSO) [27], Dragonfly Algorithm (DA) [28], Whale Optimization Algorithm (WOA) [29], Spotted Hyena Optimizer (SHO) [30], Emperor Penguin Optimizer (EPO) [31], Seagull Optimization Algorithm (SOA) [32], Harris Hawks Optimization (HHO) [33], Tunicate Swarm Algorithm (TSA) [34], Sooty Tern Optimization Algorithm (STOA) [35], Slime Mould Algorithm (SMA) [36], Rat Swarm Optimizer (RSO) [37], and Aquila Optimizer (AO) [38] are also essential parts in this branch. The final type is influenced by human learning habits including Search Group Algorithm (SGA) [39], Soccer League Competition Algorithm (SLC) [40], and Teaching-Learning-Based Optimization (TLBO) [41].

With their own distinctive characteristics, these metaheuristics are commonly used in a variety of computing science fields, such as fault diagnosis [42], feature selection [43], engineering optimization [44], path planning [45], and parameter identification [46]. Nevertheless, it has been shown that most basic algorithms still suffer from slow convergence, poor accuracy, and ease of getting trapped in the local optimum [7, 15] in several applications. The no free lunch (NFL) theorem indicates that there is no general algorithm that could be appropriate for all optimization tasks [47]. Hence, encouraged by this theorem, many scholars have begun improving existing algorithms to generate higher-quality solutions from different aspects. Fan et al. [7] proposed an enhanced Equilibrium Optimizer (m-EO) based on reverse learning and novel updating mechanisms, which considerably increases its convergence speed and precision. Jia et al. [48] introduced a dynamic control parameter and mutation strategies into Harris Hawks Optimization, and then proposed a novel method called DHHO/M to segment satellite images. Ding et al. [49] constructed an improved Whale Optimization Algorithm (LNAWOA) for continuous optimization, in which a nonlinear convergence factor is utilized to speed up the convergence. Besides, the authors in [50] employed Lévy flight and crossover operations to further promote the robustness and global exploration capability of the native Salp Swarm Algorithm. Recently, there has also been an emerging trend of combining two promising MAs to overcome the performance drawbacks of a single algorithm. For instance, Abdel-Basset et al. [51] incorporated the Slime Mould Optimizer and the Whale Optimization Algorithm into an efficient hybrid algorithm (HSMAWOA) for image segmentation of chest X-rays to determine whether a person is infected with the COVID-19 virus. Fan et al. [9] proposed a new hybrid algorithm named ESSAWOA, which has been successfully applied to solve structural design problems. Moreover, Liu et al. [52] developed a hybrid imperialist competitive evolutionary algorithm and used it to find the best portfolio solutions. Dhiman [53] constructed a hybrid bio-inspired Emperor Penguin and Salp Swarm Algorithm (ESA) for numerical optimization that effectively deals with different constrained problems in engineering optimization.

In this study, we focus on a novel swarm intelligence algorithm, namely the Gorilla Troops Optimizer (GTO), which was proposed by Abdollahzadeh et al. in 2021 [54]. The inspiration of GTO originates from the collective lifestyle and social intelligence of gorillas. Preliminary research indicates that GTO has excellent performance on benchmark function optimization. Nevertheless, similar to other meta-heuristic algorithms, it still suffers from low optimization accuracy, premature convergence, and the propensity to fall into the local optimum when solving complex optimization problems [55]. These defects are mainly associated with the poor quality of the initial population, the lack of a proper balance between exploration and exploitation, and the low likelihood of large spatial leaps in the iteration process. Therefore, the NFL theorem motivates us to improve this latest swarm-inspired algorithm.

In view of the above discussion, to enhance GTO for global optimization, an improved gorilla troops optimizer known as IGTO is developed in this paper by incorporating three improvements. Firstly, Circle chaotic mapping is utilized to replace the random initialization mode of GTO for enriching population diversity. Secondly, a novel lens opposition-based learning mechanism is adopted to boost the exploration capability of the algorithm while avoiding falling into the local optimum. Additionally, adaptive β-hill climbing, a new local search algorithm, is embedded into GTO to facilitate better solution accuracy and exploitation trends. The effectiveness of the proposed IGTO is comprehensively evaluated and investigated by a series of comparisons with the basic GTO and several state-of-the-art algorithms, including GWO, WOA, SSA, HHO, and SMA, on 19 classical benchmark functions. The experimental results demonstrate that IGTO performs better than the other competitors in terms of solution quality, convergence accuracy and stability. In addition, to further validate its applicability, IGTO is applied to solve four engineering design problems and train multilayer perceptron. Our results reveal that the proposed method also has strong competitiveness and superiority in real-life applications.

The remainder of this paper is arranged as follows: the basic GTO algorithm is briefly described in Section 2. In Section 3, a detailed description of three improved mechanisms and the proposed IGTO is presented. In Section 4, the experimental results of benchmark function optimization are reported and discussed. Besides, the applicability of the IGTO for resolving practical engineering problems and training multilayer perceptron is highlighted and analyzed in Sections 5 and 6. Finally, the conclusion of this work and potential future work directions are given in Section 7.

2  Gorilla Troops Optimizer

Gorilla troops optimizer is a recently proposed nature-inspired and gradient-free optimization algorithm, which emulates the gorillas' lifestyle in the group [54]. Gorillas live in a group called a troop, composed of an adult male gorilla, also known as the silverback, multiple adult female gorillas and their offspring. A silverback gorilla (shown in Fig. 1) is typically more than 12 years old and is named after the unique silver hair that grows on his back at puberty. Besides, the silverback is the head of the whole troop, taking all decisions, mediating disputes, directing others to food resources, determining group movements, and being responsible for safety. Younger male gorillas at the age of 8 to 12 years are called blackbacks since they still lack the silver-coloured back hair. They are affiliated with the silverback and act as backup defenders for the group. In general, both female and male gorillas tend to migrate from the group where they were born to a second new group. Alternatively, mature male gorillas are also likely to separate from their original group and form troops of their own by attracting migrating females. However, some male gorillas sometimes choose to stay in the initial troop and continue to follow the silverback. If the silverback dies, these males might engage in a brutal battle for dominance of the group and mating with the adult females. Based on the above concept of gorilla group behaviour in nature, the specific mathematical model of the GTO algorithm is developed. As with other intelligent algorithms, GTO contains three main parts: initialization, global exploration, and local exploitation, which are explained thoroughly below.


Figure 1: Silverback gorilla [54]

2.1 Initialization Phase

Suppose there are N gorillas in the D-dimensional space. The position of the i-th gorilla in the space can be defined as Xi = (xi,1, xi,2, ⋯, xi,D), i = 1, 2, ⋯, N. Thus, the initialization process of gorilla populations can be described as:

$$X_{N\times D}=\mathrm{rand}(N,D)\times(ub-lb)+lb \tag{1}$$

where ub and lb are the upper and lower boundaries of the search range, respectively, and rand(N, D) denotes the matrix with N rows and D columns, where each element is a random number between 0 and 1.
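For readers who prefer a concrete illustration of Eq. (1), a minimal Python sketch of the random initialization is given below. This sketch is ours and not the authors' implementation; the population size, dimension and bounds in the example are arbitrary.

import numpy as np

def init_population(N, D, lb, ub, rng=None):
    """Random initialization of N gorillas in D dimensions, Eq. (1)."""
    rng = np.random.default_rng() if rng is None else rng
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    # rand(N, D) * (ub - lb) + lb, applied element-wise over the search box
    return rng.random((N, D)) * (ub - lb) + lb

# Example: 30 gorillas in the 10-dimensional box [-100, 100]^10
X = init_population(30, 10, lb=-100.0, ub=100.0)
print(X.shape)  # (30, 10)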

2.2 Exploration Phase

Once gorillas depart from their original troop, they will move to diverse environments in nature that they might or might not have ever seen before. In the GTO algorithm, all gorillas are considered as candidate solutions, and the optimal solution in each optimization process is deemed to be the silverback. In order to accurately simulate such natural behaviour of migration, the position update equation of the gorilla for the exploration stage was designed by employing three different approaches including migrating towards unknown positions, migrating around familiar locations, and moving to other groups, as shown in Eq. (2):

$$GX(t+1)=\begin{cases}(ub-lb)\times r_2+lb, & r_1<p\\ (r_3-C)\times X_A(t)+L\times Z\times X(t), & r_1\ge 0.5\\ X(t)-L\times\left(L\times(X(t)-X_B(t))+r_4\times(X(t)-X_B(t))\right), & r_1<0.5\end{cases} \tag{2}$$

where t indicates the current iteration number, X(t) denotes the current position vector of the individual gorilla, and GX(t + 1) refers to the candidate position of the search agent in the next iteration. Besides, r1, r2, r3 and r4 are all random values between 0 and 1. XA(t) and XB(t) are two randomly selected gorilla positions in the current population. p is a constant. Z denotes a row vector in the problem dimension whose elements are randomly generated in [−C, C]. The parameter C is calculated according to Eq. (3).

$$C=(\cos(2\times r_5)+1)\times\left(1-\frac{t}{Maxiter}\right) \tag{3}$$

where cos(·) represents the cosine function, r5 is a random number in the range of 0 to 1, and Maxiter indicates the maximum iterations.

The parameter L in Eq. (2) is computed as follows:

$$L=C\times l \tag{4}$$

where l is a random number in [−1, 1].

Upon the completion of the exploration, the fitness values of all newly generated candidate solutions GX(t + 1) are evaluated. Provided that GX is better than X, i.e., F(GX) < F(X), where F(·) denotes the fitness function for a certain problem, it will be retained and replace the original solution X(t). In addition, the optimal solution at this stage is selected as the silverback Xsilverback.
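To make the exploration update concrete, the following Python sketch implements Eqs. (2)–(4) for a whole population. It is an illustration only; the default value of the constant p and the boundary clipping at the end are assumptions of this sketch, not values prescribed by the text above.

import numpy as np

def explore_step(X, t, max_iter, lb, ub, p=0.03, rng=None):
    """One GTO exploration update, Eqs. (2)-(4), applied to the population X (N x D)."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    GX = np.empty_like(X)
    for i in range(N):
        r1, r2, r3, r4 = rng.random(4)
        C = (np.cos(2 * rng.random()) + 1) * (1 - t / max_iter)   # Eq. (3)
        L = C * rng.uniform(-1, 1)                                # Eq. (4)
        Z = rng.uniform(-C, C, size=D)                            # row vector with elements in [-C, C]
        a, b = rng.integers(0, N, size=2)                         # indices of X_A and X_B
        if r1 < p:                                                # migrate to an unknown position
            GX[i] = (ub - lb) * r2 + lb
        elif r1 >= 0.5:                                           # move towards other gorillas
            GX[i] = (r3 - C) * X[a] + L * Z * X[i]
        else:                                                     # migrate around a known position
            GX[i] = X[i] - L * (L * (X[i] - X[b]) + r4 * (X[i] - X[b]))
    # Greedy selection (keep GX only where F(GX) < F(X)) would follow, as described above.
    return np.clip(GX, lb, ub)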

2.3 Exploitation Phase

When the troop has just been established, the silverback is powerful and healthy, while the other male gorillas are still young. They obey all the decisions of the silverback in the search for diverse food resources and serve the silverback gorilla faithfully. Inevitably, the silverback also grows old and eventually dies, and the younger blackbacks in the troop might then get involved in a violent conflict with the other males for mating with the adult females and for the leadership. As mentioned previously, the two behaviours of following the silverback and competing for adult female gorillas are modelled in the exploitation phase of GTO. At the same time, the parameter W is introduced to control the switch between them. If the value of C in Eq. (3) is greater than or equal to W, the first mechanism of following the silverback is selected. The mathematical expression is as follows:

$$GX(t+1)=L\times M\times(X(t)-X_{silverback})+X(t) \tag{5}$$

where L is also evaluated using Eq. (4), Xsilverback represents the best solution obtained so far, and X(t) denotes the current position vector. In addition, the parameter M could be calculated according to Eq. (6):

$$M=\left(\left|\frac{1}{N}\sum_{i=1}^{N}X_i(t)\right|^{2^{L}}\right)^{\frac{1}{2^{L}}} \tag{6}$$

where N refers to the population size, and Xi(t) denotes each position vector of the gorilla in the current iteration.

If C < W, the latter mechanism is chosen; in this case, the locations of the gorillas are updated as follows:

$$GX(t+1)=X_{silverback}-(X_{silverback}\times Q-X(t)\times Q)\times A \tag{7}$$

$$Q=2\times r_6-1 \tag{8}$$

$$A=\varphi\times E \tag{9}$$

$$E=\begin{cases}N_1, & r_7\ge 0.5\\ N_2, & r_7<0.5\end{cases} \tag{10}$$

In Eq. (7), X(t) denotes the current position and Q stands for the impact force, which is computed using Eq. (8). In Eq. (8), r6 is a random value in the range of 0 to 1. Moreover, the coefficient A used to mimic the violence intensity in the competition is evaluated by Eq. (9), where φ denotes a constant and the values of E are assigned with Eq. (10). In Eq. (10), r7 is also a random number in [0, 1]. If r7 ≥ 0.5, E is defined as a 1-by-D array of normally distributed random numbers, where D is the spatial dimension. Instead, if r7 < 0.5, E equals a single random number drawn from the normal distribution.

Similarly, at the end of the exploitation process, the fitness values of the newly generated candidate solutions GX(t + 1) are also calculated. If F(GX) < F(X), the solution GX will be preserved and participate in the subsequent optimization, while the optimal solution among all individuals is defined as the silverback Xsilverback. The pseudo-code of GTO is shown in Algorithm 1.
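Similarly, the exploitation update of Eqs. (5)–(10) can be sketched as follows. This is again an illustration rather than the reference code; the switch threshold W and the constant φ are set to values commonly used for GTO, which is an assumption on our part.

import numpy as np

def exploit_step(X, X_silverback, t, max_iter, W=0.8, phi=3.0, rng=None):
    """One GTO exploitation update, Eqs. (5)-(10), applied to the population X (N x D)."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    GX = np.empty_like(X)
    C = (np.cos(2 * rng.random()) + 1) * (1 - t / max_iter)          # Eq. (3)
    L = C * rng.uniform(-1, 1)                                        # Eq. (4)
    if C >= W:                                                        # follow the silverback
        # Eq. (6), taken element-wise over the population mean
        M = (np.abs(X.mean(axis=0)) ** (2 ** L)) ** (1 / (2 ** L))
        for i in range(N):
            GX[i] = L * M * (X[i] - X_silverback) + X[i]              # Eq. (5)
    else:                                                             # competition for adult females
        for i in range(N):
            Q = 2 * rng.random() - 1                                  # Eq. (8), impact force
            E = rng.standard_normal(D) if rng.random() >= 0.5 else rng.standard_normal()  # Eq. (10)
            A = phi * E                                               # Eq. (9), violence intensity
            GX[i] = X_silverback - (X_silverback * Q - X[i] * Q) * A  # Eq. (7)
    return GX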


3  The Proposed IGTO Algorithm

In order to further improve the performance of the basic GTO algorithm for global optimization, a novel variant named IGTO is presented in this section. First, Circle chaotic mapping is adopted to initialize the gorilla populations, with the aim of increasing the population diversity. Second, an effective lens opposition-based learning strategy is implemented to expand the search range and prevent the algorithm from falling into the local optimum. Finally, the modified algorithm is hybridized with adaptive β-hill climbing for a better exploitation trend and solution quality. The specific procedures are described as follows.

3.1 Circle Chaotic Mapping

It is indicated that the quality of the initial population individuals has a significant impact on the efficiency of most current metaheuristic algorithms [49, 56]. When applying the GTO algorithm to tackle an optimization problem, the population is usually initialized by means of a stochastic search. Although this method is easy to implement, it suffers from a lack of ergodicity and depends excessively on the probability distribution, which cannot guarantee that the initial population is uniformly distributed in the search space, thereby deteriorating the solution precision and convergence speed of the algorithm.

Chaotic mapping is a complex dynamic method found in nonlinear systems with the properties of unpredictability, randomness, and ergodicity. Compared to a random distribution, chaotic mapping allows the initial population individuals to explore the solution space thoroughly with a higher convergence speed and sensitivity, so it is widely adopted to improve the optimization performance of algorithms. Research results have proven that Circle chaotic mapping has superior exploration performance compared with the commonly used Logistic chaotic mapping and Tent chaotic mapping [57]. Consequently, in order to boost the population diversity and take full advantage of the information in the solution space, Circle chaotic mapping is introduced in this study to improve the initialization mode of the basic GTO. The mathematical expression of Circle chaotic mapping is as follows:

$$z_{k+1}=\operatorname{mod}\!\left(z_k+b-\frac{a}{2\pi}\sin(2\pi z_k),\,1\right),\quad z_k\in(0,1) \tag{11}$$

where a = 0.5 and b = 0.2. With the same independent parameters, the random search mechanism and Circle chaotic mapping are each executed 300 times, and the obtained results are shown in Fig. 2. It can be seen from the figure that the traversal of Circle chaotic mapping is wider and more homogeneously distributed in the feasible domain [0, 1] than that of random search. Hence, the proposed algorithm has a more robust global exploration ability after incorporating Circle chaotic mapping.


Figure 2: Distributions of random search and circle chaotic mapping. (a) Random distribution (b) Circle distribution

The pseudo-code for initializing the population using Circle chaotic mapping is outlined in Algorithm 2.
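As a rough illustration of this initialization step, the chaotic population can be generated along the following lines in Python. This is an illustrative sketch, not the authors' code; seeding each dimension with a random z0 in (0, 1) is our assumption.

import numpy as np

def circle_chaotic_init(N, D, lb, ub, a=0.5, b=0.2, rng=None):
    """Initialize an N x D population with Circle chaotic mapping, Eq. (11),
    then scale the chaotic values from (0, 1) into the search box [lb, ub]."""
    rng = np.random.default_rng() if rng is None else rng
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    Z = np.empty((N, D))
    z = rng.random(D)                                                   # z_0 in (0, 1), one seed per dimension
    for k in range(N):
        z = np.mod(z + b - (a / (2 * np.pi)) * np.sin(2 * np.pi * z), 1.0)   # Eq. (11)
        Z[k] = z
    return Z * (ub - lb) + lb

X0 = circle_chaotic_init(30, 10, lb=-100.0, ub=100.0)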


3.2 Lens Opposition-Based Learning

As a novel technique in the area of smart computing, lens opposition-based learning (LOBL), which combines traditional opposition-based learning (OBL) [58] with the principle of convex lens imaging, has been successfully employed to optimize different intelligent algorithms [59, 60]. Its basic idea is to simultaneously calculate and compare the candidate solution and the corresponding reverse solution, and then choose the superior one to proceed to the next iteration. As theoretically demonstrated by Fan et al. [9], LOBL can produce a solution close to the global optimum with a higher probability. Therefore, in this study, LOBL is utilized to update the candidate solutions during the exploration phase, in order to enlarge the search range and help the algorithm escape from the local optimum. Several concepts related to LOBL are represented mathematically as follows.

Lens imaging is a physical optics phenomenon: when an object is located more than two focal lengths away from a convex lens, a smaller, inverted image is produced on the opposite side of the lens. Taking the one-dimensional search space in Fig. 3 for illustration, there is a convex lens with focal length f placed at the base point O (the midpoint of the search range [lb, ub]). Besides, an object p with height h is placed on the coordinate axis, and its projection is GX (the candidate solution). The distance u from the object to the lens is greater than 2f. Through the lens imaging operation, an inverted image p* of height h* is obtained, which is projected as GX* (the reverse solution) on the x-axis. In accordance with the rules of lens imaging and similar triangles, the geometrical relationship in Fig. 3 can be expressed as:

$$\frac{(lb+ub)/2-GX}{GX^{*}-(lb+ub)/2}=\frac{h}{h^{*}} \tag{12}$$


Figure 3: The principle of LOBL mechanism

Here, letting the scale factor n = h/h*, the reverse solution GX* is obtained by transforming Eq. (12):

$$GX^{*}=\frac{lb+ub}{2}+\frac{lb+ub}{2n}-\frac{GX}{n} \tag{13}$$

It is obvious that when n = 1, Eq. (13) can be simplified as the general formulation of OBL strategy:

$$GX^{*}=lb+ub-GX \tag{14}$$

Thus, the opposition-based learning strategy can be regarded as a special case of LOBL. In comparison with OBL, LOBL allows acquiring dynamic reverse solutions and a wider search range by tuning the scale factor n.

Generally, Eq. (13) could be extended into D-dimensional space:

$$GX_j^{*}=\frac{lb_j+ub_j}{2}+\frac{lb_j+ub_j}{2n}-\frac{GX_j}{n} \tag{15}$$

where lbj and ubj are the lower and upper limits of the j-th dimension, respectively, j = 1, 2, ⋯, D, and GXj* denotes the inverse solution of GXj in the j-th dimension.

When a new inverse solution is generated, there is no guarantee that it is always better than the current candidate solution (the gorilla position). Therefore, it is necessary to evaluate the fitness values of the inverse solution and the candidate solution; the fitter one is then selected to continue participating in the subsequent exploitation phase, which is described as follows:

$$GX_{next}=\begin{cases}GX^{*}, & \text{if } F(GX^{*})<F(GX)\\ GX, & \text{otherwise}\end{cases} \tag{16}$$

where GX* indicates the reverse solution generated by LOBL, GX is the current candidate solution, GXnext is the selected gorilla to continue the subsequent position updating, and F(·) denotes the fitness function of the problem. The pseudo-code of lens opposition-based learning mechanism is shown in Algorithm 3.
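The LOBL update of Eqs. (15) and (16) amounts to only a few lines. The sketch below is illustrative; the clipping of the reverse solution back into [lb, ub] is our assumption, and n = 12000 follows the setting quoted later in Subsection 4.2.

import numpy as np

def lens_opposition(GX, lb, ub, n=12000.0):
    """Dimension-wise reverse solution of a candidate GX, Eq. (15)."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    return (lb + ub) / 2 + (lb + ub) / (2 * n) - GX / n

def lobl_select(GX, fitness, lb, ub, n=12000.0):
    """Keep the fitter of GX and its lens-opposed counterpart, Eq. (16)."""
    GX_star = np.clip(lens_opposition(GX, lb, ub, n), lb, ub)   # assumption: clip to the feasible box
    return GX_star if fitness(GX_star) < fitness(GX) else GX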


3.3 Adaptive β-Hill Climbing

Adaptive β-hill climbing (AβHC) [61] is a newly proposed local search-based algorithm, which is, in essence, a modified version of β-hill climbing (βHC). Research has found that AβHC provides better performance than many other famous local search algorithms, including Simulated Annealing (SA) [20], Tabu Search (TS) [62], and Variable Neighborhood Search (VNS) [63]. To boost the algorithm's exploitation ability and the quality of final solutions, AβHC is integrated into the basic GTO to help search the neighborhoods of the optimal solution in this study. The definition of AβHC is represented mathematically as follows.

For a given current solution Xi = (xi,1, xi,2, …, xi,D), AβHC iteratively generates an enhanced solution Xi″ = (xi,1″, xi,2″, …, xi,D″) on the basis of two control operators: the 𝒩-operator and the β-operator. The 𝒩-operator first transfers Xi to a new neighborhood solution Xi′ = (xi,1′, xi,2′, …, xi,D′), which is defined in Eqs. (17) and (18) as:

$$x_{i,j}'=x_{i,j}\pm U(0,1)\times\mathcal{N},\quad j=1,2,\dots,D \tag{17}$$

$$\mathcal{N}(t)=1-\frac{t^{1/K}}{Maxiter^{1/K}} \tag{18}$$

where U(0, 1) denotes a random number in the interval [0, 1], xi,j denotes the value of the decision variable in the j-th dimension, t denotes the current iteration, Maxiter refers to the maximum number of iterations, 𝒩 represents the bandwidth distance between the current solution and its neighbor, D is the spatial dimension, and the parameter K is a constant.

Immediately afterwards, the decision variables of the new solution Xi″ are assigned either from the neighborhood solution or randomly from the available range by the β-operator, as follows:

$$x_{i,j}''=\begin{cases}x_{i,r}, & \text{if } r_8<\beta\\ x_{i,j}', & \text{otherwise}\end{cases} \tag{19}$$

$$\beta(t)=\beta_{min}+(\beta_{max}-\beta_{min})\times\frac{t}{Maxiter} \tag{20}$$

where r8 is a random number in the interval [0, 1], xi,r denotes a random value chosen from the feasible range of that particular dimension of the problem, and βmax and βmin denote the maximum and minimum values of the probability β ∈ [0, 1], respectively. If the generated solution Xi″ is better than the current solution Xi under consideration, then Xi is replaced by Xi″. The pseudo-code of adaptive β-hill climbing is given in Algorithm 4.
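A compact Python sketch of the AβHC procedure described by Eqs. (17)–(20) is shown below. It is illustrative only; the greedy acceptance rule and the number of inner iterations are assumptions, while K, βmin and βmax follow the settings quoted in Subsection 4.2.

import numpy as np

def adaptive_beta_hill_climbing(x, fitness, lb, ub, max_iter=100, K=30,
                                beta_min=0.1, beta_max=1.0, rng=None):
    """Local search around x with adaptive beta-hill climbing, Eqs. (17)-(20)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x, dtype=float)
    fx = fitness(x)
    D = x.size
    for t in range(1, max_iter + 1):
        # N-operator, Eqs. (17)-(18): random walk with a shrinking bandwidth
        bw = 1 - (t ** (1 / K)) / (max_iter ** (1 / K))
        x_new = x + np.where(rng.random(D) < 0.5, 1.0, -1.0) * rng.random(D) * bw
        # beta-operator, Eqs. (19)-(20): re-sample some variables uniformly at random
        beta = beta_min + (beta_max - beta_min) * t / max_iter
        mask = rng.random(D) < beta
        x_new[mask] = rng.uniform(lb, ub, size=D)[mask]
        x_new = np.clip(x_new, lb, ub)
        fn = fitness(x_new)
        if fn < fx:                                   # keep the improved neighbor
            x, fx = x_new, fn
    return x, fx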


3.4 Algorithm Flowchart

Based on the improved mechanisms mentioned in Subsections 3.1–3.3 above, the flowchart of the proposed IGTO algorithm for global optimization problems is illustrated in Fig. 4. Moreover, Algorithm 5 outlines the pseudo-code of IGTO.



Figure 4: Flowchart of the proposed IGTO algorithm

4  Experimental Results and Discussion

In this section, a total of 19 benchmark functions from the literature [64] are selected for contrast experiments to comprehensively evaluate the feasibility and effectiveness of the proposed IGTO algorithm. First, the definitions of these benchmark functions, parameter settings, and measurements of algorithm performance are presented. Afterwards, the basic GTO and five other advanced meta-heuristic algorithms, namely GWO [65], WOA [29], SSA [66], HHO [33], and SMA [36], are employed as competitors to validate the improvements and superiority of the proposed algorithm based on the solution accuracy, boxplots, convergence behavior, average computation time, and statistical results. Finally, the scalability of IGTO is investigated by solving high-dimensional problems. All the simulation experiments are implemented in MATLAB R2014b on a Microsoft Windows 7 system, and the hardware platform of the computer is configured as an Intel(R) Core(TM) i5-7400 CPU @ 3.00 GHz and 8 GB of RAM.

4.1 Benchmark Function

The benchmark functions used in this paper can be divided into three categories: unimodal (UM), multimodal (MM), and fix-dimension multimodal (FM). The unimodal functions (F1–F7) contain only one global minimum and are frequently used to assess the exploitation capability and convergence rate of algorithms. The multimodal functions (F8–F13), consisting of several local minima and a single global optimum in the search space, are well suited for assessing an algorithm's capability to explore and escape from local optima. The fix-dimension multimodal functions (F14–F19) are combinations of the previous two forms of functions, but with lower dimensions, and they are designed to evaluate the stability of the algorithm. Table 1 shows the formulations, spatial dimensions, search ranges, and theoretical minima of these functions. In addition, 3D images of the search space of several typical functions are displayed in Fig. 5.



Figure 5: Search space of typical benchmark functions in 3D

4.2 Parameter Setting

In order to estimate the performance of the improved IGTO algorithm in solving global optimization problems, we select the basic GTO [54] and five state-of-the-art algorithms, namely GWO [65], WOA [29], SSA [66], HHO [33], and SMA [36]. For fair comparisons, the maximum iteration and population size of the seven algorithms are set as 500 and 30, respectively. As per the references [9, 61] and extensive trials, in the proposed IGTO algorithm, we set the scale factor n = 12000, K = 30, βmax = 1 and βmin = 0.1. Besides, all parameter values of the remaining six algorithms are set the same as those recommended in the original papers, as shown in Table 2. These parameter settings assure the fairness of the comparison experiments by allowing each algorithm to make the most of its optimization property. All algorithms are executed independently 30 times on each benchmark function to decrease accidental error.


4.3 Evaluation Criteria of Performance

In this study, two metrics are used to measure the performance of the proposed algorithm including the average fitness value (Avg) and standard deviation (Std) of optimization results. The average fitness value intuitively characterizes the convergence accuracy and the search capability of the algorithm, which is calculated as follows:

$$Avg=\frac{1}{n}\sum_{i=1}^{n}S_i \tag{21}$$

where n denotes the times that an algorithm has run, and Si indicates the obtained result of each operation.

The standard deviation indicates the deviation degree between the experimental results and the average value, and it is calculated as follows:

$$Std=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(S_i-Avg)^{2}} \tag{22}$$
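For clarity, the two metrics can be computed directly from the per-run results, as in the short Python example below; the numbers are arbitrary illustrative values, not results from this paper.

import numpy as np

# Hypothetical best-fitness values of one algorithm over n = 5 independent runs
S = np.array([1.2e-8, 3.4e-9, 7.7e-9, 2.1e-8, 5.5e-9])
avg = S.mean()               # Eq. (21)
std = S.std(ddof=1)          # Eq. (22): sample standard deviation with the 1/(n-1) factor
print(f"Avg = {avg:.3e}, Std = {std:.3e}")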

4.4 Comparison with IGTO and Other Algorithms

In this subsection, to examine the performance of the proposed algorithm, IGTO is compared with the basic GTO and five other advanced algorithms according to the benchmark function optimization results. For fair comparisons, the maximum iteration and population size of the seven algorithms are set as 500 and 30, respectively, and the remaining parameter settings have been given in Subsection 4.2 above. Meanwhile, each algorithm runs 30 times independently on the test functions F1–F19 in Table 1 to decrease random error. The average fitness value (Avg) and standard deviation (Std) of each algorithm obtained from the experiments are reported in Table 3. In general, the closer the average fitness (Avg) is to the theoretical minimum, the higher the convergence accuracy of the algorithm; meanwhile, the smaller the value of the standard deviation (Std), the better the stability and robustness of the algorithm.

As seen from Table 3, when solving the unimodal benchmark functions (F1–F7), IGTO obtains the global optimal minima with regard to the average fitness on functions F1–F4. For function F5, the convergence accuracy of IGTO is greatly improved over its predecessor GTO, and it is the winner among all algorithms. For test function F6, the results of IGTO are similar to those of SSA and GTO, yet still marginally better than them. Besides, IGTO shows superior results on function F7 in contrast to the other optimizers. In terms of standard deviation, the proposed IGTO has excellent performance on all test problems. Given the properties of the unimodal functions, these results show that IGTO has outstanding search precision and local exploitation potential.


The multimodal benchmark functions (F8–F13) have many local minima in the search space, so these functions are usually employed to analyze an algorithm's potential to avoid the local optima. For functions F8, F12 and F13, the average fitness and standard deviation of IGTO are obviously better than those of the rest of the algorithms. For function F9, IGTO obtains the same global optimal minimum as WOA, HHO, SMA, and GTO. Moreover, HHO, SMA, GTO and IGTO obtain the same performance on functions F10 and F11. This validates that the proposed IGTO can effectively bypass the local optimum and find high-quality solutions.

The fix-dimension multimodal functions (F14–F19) contain only a few local optima and are designed to evaluate the stability of the algorithm in switching between exploration and exploitation processes. As far as the average fitness values are concerned, IGTO performs the same as SMA and GTO on function F14, albeit better than the others. For functions F15, F18 and F19, IGTO generates results superior to all competitors. For function F16, the performance of the seven optimizers is identical. Although the result of the proposed IGTO is worse than that of HHO on function F17, it still ranks second and shows a significant improvement over the basic GTO. On the other hand, IGTO achieves the optimal standard deviation on all test cases. This proves that our proposed IGTO is able to keep a better balance between exploration and exploitation.

In view of the above, it can be summarized that the proposed multi-strategy IGTO algorithm exhibits strong global search capability and is superior to the other six intelligent algorithms in comparison. Benefiting from hybridizing AβHC with GTO, the solution precision of IGTO is greatly strengthened. At the same time, the LOBL strategy effectively expands the unknown search area and prevents the algorithm from falling into local optima.

In order to better illustrate the stability of the proposed algorithm, the corresponding boxplots of functions F1, F2, F3, F5 and F6 from the UM benchmark functions, functions F9, F10 and F12 from the MM benchmark functions, and function F15 selected from the FM benchmark functions are shown in Fig. 6. From Fig. 6, it can be seen that the IGTO algorithm shows remarkable consistency on most problems with respect to the median, maximum and minimum values compared with the others. In addition, IGTO generates no outliers during the iterations, with a more concentrated distribution of convergence values, thereby verifying the strong robustness and superiority of the improved IGTO.


Figure 6: Boxplot analysis of different algorithms on partial benchmark functions

Fig. 7 visualizes the convergence curves of different algorithms on nine representative benchmark functions. Likewise, functions F1, F2, F3, F5 and F6 are unimodal, functions F9, F10 and F12 are multimodal, and function F15 belongs to the fix-dimension multimodal category. From Fig. 7, it is clear that the convergence speed of IGTO is the fastest among all algorithms on functions F1–F3, and the proposed algorithm can rapidly reach the global optimal solution at the beginning of the search process. For functions F5 and F6, IGTO has a similar trend to HHO and GTO in the initial stage, but its efficiency is fully demonstrated in the late iterations, and eventually the proposed IGTO obtains the best result. For function F9, IGTO maintains a superior convergence rate and obtains the global optimum within 20 iterations. Although the convergence accuracy of IGTO is the same as that of HHO, SMA and the basic GTO on function F10, it converges more quickly. For function F12, the proposed algorithm is still the champion compared with the remaining six optimizers in terms of final accuracy and speed. Besides, the convergence curves of the seven algorithms are pretty close on the fix-dimension multimodal function F15. However, the performance of IGTO is slightly better than the others.


Figure 7: Convergence curves of different algorithms on nine benchmark functions

On the basis of the experimental results of the boxplot analysis and convergence curves, IGTO shows a considerable enhancement in convergence speed and stability compared with the basic GTO, which is owing to the good foundation for global search laid by Circle chaotic mapping and the LOBL strategy.

The average computation time spent by each algorithm on test functions F1–F19 is reported in Table 4. For a more intuitive conclusion, the total runtime of each method is calculated and ranked as follows: SMA (14.118 s) > IGTO (8.073 s) > GTO (6.690 s) > HHO (6.568 s) > GWO (4.912 s) > SSA (4.897 s) > WOA (4.065 s). It can be found that IGTO consumes more computation time than GTO and ranks second highest among the seven algorithms. Compared with the basic GTO algorithm, the introduction of the three improved strategies increases the number of steps of the algorithm and requires extra time. Of course, the high computation cost of the GTO algorithm itself is also a primary cause. However, IGTO takes less time than SMA on most test functions. To improve the solution accuracy, a little more runtime is sacrificed. On the whole, the proposed algorithm is acceptable in view of its optimal search performance, and its remaining limitation is the need to decrease the computational time.


Moreover, since the average fitness (Avg) and standard deviation (Std) over 30 runs do not compare the results of each individual run, it is often not sufficient to evaluate the performance of an algorithm based only on the mean and standard deviation. To further verify the robustness and fairness of the improved algorithm, the Wilcoxon rank-sum test [67], a nonparametric statistical test approach, is used to estimate the significant differences between IGTO and the other algorithms. For the Wilcoxon rank-sum test, the significance level is set to 0.05 and the acquired p-values are listed in Table 5. In this table, the sign “+” denotes that IGTO performs significantly better than the corresponding algorithm, “=” denotes that the performance of IGTO is analogous to that of the compared algorithm, “-” denotes that IGTO is poorer than the compared one, and the last line counts the total number of each sign. It can be seen from the table that for the 19 benchmark test functions, the proposed IGTO algorithm outperforms GWO on 19 functions, WOA and SSA on 18 functions, HHO on 16 functions, SMA on 14 functions, and the basic GTO on 13 functions, respectively. Therefore, according to the statistical analysis, our proposed IGTO achieves a significant enhancement over the other algorithms and is the best optimizer among them.
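The rank-sum comparison can be reproduced with standard statistical software. The snippet below is an illustration using SciPy on synthetic data; the run values are randomly generated here and are not the paper's results.

import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Synthetic per-run best fitness of two algorithms on one benchmark (30 runs each)
runs_a = rng.normal(1e-6, 1e-7, 30)
runs_b = rng.normal(5e-6, 1e-6, 30)

stat, p = ranksums(runs_a, runs_b)
print(p, p < 0.05)   # a p-value below 0.05 indicates a significant difference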


Lastly, the mean absolute error (MAE) of all algorithms on 19 benchmark problems is evaluated and ranked. MAE is also a useful statistical tool to reveal the gap between the experimental results and the theoretical values [1], and its mathematical expression is as follows:

$$MAE=\frac{1}{N}\sum_{i=1}^{N}|a_i-o_i| \tag{23}$$

In Eq. (23), N is the number of benchmark functions used, oi represents the desired value of each test function, and ai is the actual value obtained. The MAE and relative rankings of each algorithm are reported in Table 6. From this table, it is obvious that IGTO outperforms all competitors and the MAE of IGTO is reduced by 2 orders of magnitude compared to GTO, which once again demonstrates the superiority of the proposed algorithm statistically.


4.5 Scalability Test

Scalability reflects the execution efficiency of an algorithm in different dimensional spaces. As the dimensions of the optimization problem increase, most current intelligent algorithms are highly prone to become ineffective and subject to the “curse of dimensionality”. To investigate the scalability of IGTO, the proposed algorithm is utilized to optimize the 13 benchmark functions F1–F13 in Table 1 with higher dimensions (i.e., 50, 100 and 200 dimensions). The average fitness values (Avg) obtained by the basic GTO and IGTO on each function are reported in Table 7. From the data in the table, it is clear that the convergence accuracy of both algorithms gradually decreases with the increase in dimensions, which is due to the fact that the larger the dimensions, the more elements an algorithm needs to optimize. However, the experimental results of IGTO are consistently superior to those of GTO on functions F1–F8, F12 and F13, and the disparity in optimization performance between them becomes increasingly obvious as the dimension increases. Besides, it is notable that the proposed IGTO can always obtain the theoretical optimal solution on functions F1–F4. For functions F9–F11, the two algorithms obtain the same performance.


The overall results fully prove that IGTO is not only able to solve low-dimensional functions with ease, but also maintains good scalability on high-dimensional functions. That is to say, the performance of IGTO does not deteriorate significantly when tackling high-dimensional problems, and it can still provide high-quality solutions effectively with good exploitation and exploration capabilities.

5  IGTO for Solving Engineering Design Problems

In this section, the applicability of the proposed IGTO is tested by solving four practical engineering design problems, including the pressure vessel design problem, the gear train design problem, the welded beam design problem and the rolling element bearing design problem. For the sake of convenience, the death penalty function [68] is used here to handle infeasible solutions that violate the constraints. IGTO runs independently 30 times for each problem, with the maximum iterations and population size set to 500 and 30, respectively. Finally, the obtained results are compared against those of different advanced meta-heuristic algorithms in the literature, and the corresponding analysis is presented.
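The death-penalty rule simply discards infeasible candidates by assigning them a prohibitively large fitness. A minimal sketch is given below; the wrapper name and the penalty constant are ours, for illustration only.

def death_penalty(objective, constraints, penalty=1e10):
    """Wrap an objective so that any solution violating a constraint g(z) <= 0
    receives a huge fitness value and is effectively rejected by the optimizer."""
    def fitness(z):
        if any(g(z) > 0 for g in constraints):
            return penalty
        return objective(z)
    return fitness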

5.1 Pressure Vessel Design

The pressure vessel design problem was first proposed by Kannan et al. [69]; its purpose is to minimize the overall fabrication cost of a pressure vessel. There are four decision variables involved in this optimum design: Ts (z1, thickness of the shell), Th (z2, thickness of the head), R (z3, inner radius), and L (z4, length of the cylindrical portion). Fig. 8 illustrates the structure of the pressure vessel used in this study, and its mathematical model can be defined as follows:


Figure 8: Pressure vessel design problem

consider

$$z=[z_1\ z_2\ z_3\ z_4]=[T_s\ T_h\ R\ L]$$

minimize

$$f(z)=0.6224z_1z_3z_4+1.7781z_2z_3^{2}+3.1661z_1^{2}z_4+19.84z_1^{2}z_3$$

subject to

$$\begin{aligned}
g_1(z)&=-z_1+0.0193z_3\le 0\\
g_2(z)&=-z_2+0.00954z_3\le 0\\
g_3(z)&=-\pi z_3^{2}z_4-\frac{4}{3}\pi z_3^{3}+1296000\le 0\\
g_4(z)&=z_4-240\le 0
\end{aligned}$$

variable range: 0 ≤ z1 ≤ 99, 0 ≤ z2 ≤ 99, 10 ≤ z3 ≤ 200, 10 ≤ z4 ≤ 200
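Putting the formulation above together with the death-penalty rule, the fitness function handed to the optimizer can be sketched as follows. This is an illustrative, self-contained version; the penalty constant and the sample design evaluated at the end are arbitrary.

import numpy as np

def pressure_vessel_fitness(z, penalty=1e10):
    """Fabrication cost of the vessel, with a death penalty when any constraint g(z) <= 0 is violated."""
    z1, z2, z3, z4 = z
    cost = (0.6224 * z1 * z3 * z4 + 1.7781 * z2 * z3**2
            + 3.1661 * z1**2 * z4 + 19.84 * z1**2 * z3)
    g = [
        -z1 + 0.0193 * z3,
        -z2 + 0.00954 * z3,
        -np.pi * z3**2 * z4 - (4.0 / 3.0) * np.pi * z3**3 + 1296000.0,
        z4 - 240.0,
    ]
    return cost if all(gi <= 0 for gi in g) else penalty

# Cost of an arbitrary feasible design (not an optimized solution)
print(pressure_vessel_fitness([1.0, 0.5, 45.0, 150.0]))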

The experimental results of IGTO for this problem are compared against those obtained by GTO, SMA [36], HHO [33], AOA [4], SSA, WOA [29], and GWO [65], as shown in Table 8. It is shown that IGTO provides the best design among all algorithms, and the minimum cost obtained is f(z)min = 5904.2189, which corresponds to the optimum solution z* = [0.7889, 0.3900, 40.8764, 192.4031]. Thus, the proposed IGTO algorithm is regarded as more suitable for solving such problems.


5.2 Gear Train Design

This is a classical mechanical engineering problem developed by Sandgren [70]. Fig. 9 shows the schematic view of the gear train. As its name suggests, the aim of this problem is to find four optimal parameters that bring the gear ratio (z2z3)/(z1z4) as close as possible to the required value of 1/6.931. The test case can also be represented mathematically as follows:


Figure 9: Gear train design problem

consider

$$z=[z_1\ z_2\ z_3\ z_4]=[n_A\ n_B\ n_C\ n_D]$$

minimize

$$f(z)=\left(\frac{1}{6.931}-\frac{z_2z_3}{z_1z_4}\right)^{2}$$

variable range: 12 ≤ z1, z2, z3, z4 ≤ 60
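The gear train objective is equally compact. The following sketch evaluates it for a frequently reported near-optimal tooth-count combination; rounding candidate values to integers is one common way of handling the discrete variables and is our assumption here.

def gear_ratio_error(z):
    """Gear train objective: squared deviation of the ratio (z2*z3)/(z1*z4) from 1/6.931."""
    z1, z2, z3, z4 = (round(v) for v in z)   # tooth numbers must be integers in [12, 60]
    return (1.0 / 6.931 - (z2 * z3) / (z1 * z4)) ** 2

print(gear_ratio_error([43, 16, 19, 49]))   # approximately 2.7e-12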

Table 9 reports the detailed results of comparative experiments for the gear train design problem. From the data in Table 9, it is apparent that the proposed IGTO is better than other optimizers in handling this case and effectively finds a brilliant solution.


5.3 Welded Beam Design

As its name implies, the purpose of the welded beam design problem is to reduce the total manufacturing cost as much as possible. This optimum design contains four decision parameters: the width of the weld (h), the length of the clamped bar (l), the height of the bar (t), and the bar thickness (b). Besides, in the optimization process, several constraints must not be violated, such as the bending stress in the beam, the buckling load, the shear stress and the end deflection. The schematic view of this problem is shown in Fig. 10, and the related mathematical formulation is illustrated as follows:

consider

$$z=[z_1\ z_2\ z_3\ z_4]=[h\ l\ t\ b]$$

minimize

$$f(z)=1.10471z_1^{2}z_2+0.04811z_3z_4(14+z_2)$$

subject to

$$\begin{aligned}
g_1(z)&=\tau(z)-\tau_{max}\le 0\\
g_2(z)&=\sigma(z)-\sigma_{max}\le 0\\
g_3(z)&=\delta(z)-\delta_{max}\le 0\\
g_4(z)&=z_1-z_4\le 0\\
g_5(z)&=P-P_c(z)\le 0\\
g_6(z)&=0.125-z_1\le 0\\
g_7(z)&=1.10471z_1^{2}+0.04811z_3z_4(14+z_2)-5\le 0
\end{aligned}$$

variable range: 0.1 ≤ z1, z4 ≤ 2, 0.1 ≤ z2, z3 ≤ 10

where

$$\begin{aligned}
\tau(z)&=\sqrt{(\tau')^{2}+2\tau'\tau''\frac{z_2}{2R}+(\tau'')^{2}},\quad \tau'=\frac{P}{\sqrt{2}z_1z_2},\quad \tau''=\frac{MR}{J},\quad M=P\left(L+\frac{z_2}{2}\right)\\
R&=\sqrt{\frac{z_2^{2}}{4}+\left(\frac{z_1+z_3}{2}\right)^{2}},\quad J=2\left\{\sqrt{2}z_1z_2\left[\frac{z_2^{2}}{4}+\left(\frac{z_1+z_3}{2}\right)^{2}\right]\right\}\\
\sigma(z)&=\frac{6PL}{z_4z_3^{2}},\quad \delta(z)=\frac{6PL^{3}}{Ez_3^{2}z_4},\quad P_c(z)=\frac{4.013E\sqrt{z_3^{2}z_4^{6}/36}}{L^{2}}\left(1-\frac{z_3}{2L}\sqrt{\frac{E}{4G}}\right)\\
P&=6000\,\mathrm{lb},\quad L=14\,\mathrm{in},\quad E=30\times 10^{6}\,\mathrm{psi},\quad G=12\times 10^{6}\,\mathrm{psi},\\
\delta_{max}&=0.25\,\mathrm{in},\quad \tau_{max}=13600\,\mathrm{psi},\quad \sigma_{max}=30000\,\mathrm{psi}
\end{aligned}$$


Figure 10: Welded beam design problem

The optimal results of IGTO and those achieved by GTO, MVO [21], SSA [66], HHO [33], WOA [29], MTDE [71], and ESSAWOA [9] are reported in Table 10. As can be seen from Table 10, the proposed IGTO provides a better design than the majority of the other algorithms. The minimum cost f(z)min = 1.72485 is obtained with the related optimal solution z* = [0.2057, 3.4705, 9.0366, 0.2057]. Therefore, it is justifiable to believe that the proposed IGTO has a superior capability to deal with such problems.


5.4 Rolling Element Bearing Design

Unlike the previous problems, the final objective of this problem is to maximize the dynamic load-carrying capacity of a rolling element bearing. The structure of a rolling element bearing is illustrated in Fig. 11. A total of ten structural variables are involved in the solution of this optimization problem, namely: the pitch diameter (Dm), the ball diameter (Db), the number of balls (Z), the inner and outer raceway curvature radius coefficients (fi and fo), Kdmin, Kdmax, δ, e, and ζ. Mathematically, the description of this problem is given as follows:

maximize

$$C_d=\begin{cases}f_cZ^{2/3}D_b^{1.8}, & \text{if } D_b\le 25.4\,\mathrm{mm}\\ 3.647f_cZ^{2/3}D_b^{1.4}, & \text{if } D_b>25.4\,\mathrm{mm}\end{cases}$$

subject to

$$\begin{aligned}
g_1(z)&=\frac{\phi_0}{2\sin^{-1}(D_b/D_m)}-Z+1\ge 0\\
g_2(z)&=2D_b-K_{dmin}(D-d)>0\\
g_3(z)&=K_{dmax}(D-d)-2D_b\ge 0\\
g_4(z)&=\zeta B_w-D_b\le 0\\
g_5(z)&=D_m-0.5(D+d)\ge 0\\
g_6(z)&=(0.5+e)(D+d)-D_m\ge 0\\
g_7(z)&=0.5(D-D_m-D_b)-\delta D_b\ge 0\\
g_8(z)&=f_i\ge 0.515\\
g_9(z)&=f_o\ge 0.515
\end{aligned}$$

where

$$\begin{aligned}
f_c&=37.91\left[1+\left\{1.04\left(\frac{1-\gamma}{1+\gamma}\right)^{1.72}\left(\frac{f_i(2f_o-1)}{f_o(2f_i-1)}\right)^{0.41}\right\}^{10/3}\right]^{-0.3}\times\left[\frac{\gamma^{0.3}(1-\gamma)^{1.39}}{(1+\gamma)^{1/3}}\right]\left[\frac{2f_i}{2f_i-1}\right]^{0.41}\\
x&=\left[\left\{\frac{D-d}{2}-3\left(\frac{T}{4}\right)\right\}^{2}+\left\{\frac{D}{2}-\frac{T}{4}-D_b\right\}^{2}-\left\{\frac{d}{2}+\frac{T}{4}\right\}^{2}\right]\\
y&=2\left\{\frac{D-d}{2}-3\left(\frac{T}{4}\right)\right\}\left\{\frac{D}{2}-\frac{T}{4}-D_b\right\}\\
\phi_0&=2\pi-2\cos^{-1}\left(\frac{x}{y}\right),\quad \gamma=\frac{D_b}{D_m},\quad f_i=\frac{r_i}{D_b},\quad f_o=\frac{r_o}{D_b},\quad T=D-d-2D_b\\
D&=160,\quad d=90,\quad B_w=30,\quad r_i=r_o=11.033\\
&0.5(D+d)\le D_m\le 0.6(D+d),\quad 0.15(D-d)\le D_b\le 0.45(D-d),\quad 4\le Z\le 50,\quad 0.515\le f_i\ \text{and}\ f_o\le 0.6\\
&0.4\le K_{dmin}\le 0.5,\quad 0.6\le K_{dmax}\le 0.7,\quad 0.3\le \delta\le 0.4,\quad 0.02\le e\le 0.1,\quad 0.6\le \zeta\le 0.85
\end{aligned}$$


Figure 11: Rolling element bearing design problem

The optimum variables and fitness values obtained by different intelligent algorithms are listed in Table 11. Compared with the other well-known optimizers, the proposed IGTO reveals a superior-quality solution at z* = [125, 21.41885, 10.94110, 0.515, 0.515, 0.4, 0.7, 0.3, 0.02, 0.6], corresponding to the best fitness Cd = 85067.962 with a significant improvement. This case once again highlights the applicability of the IGTO algorithm.


In summary, from the observed results it is reasonable to believe that the proposed IGTO is equally feasible and competitive in practical engineering design problems. In addition, the excellent performance in resolving engineering design problems indicates that IGTO can be widely used in real-world optimization problems as well.

6  IGTO for Training Multilayer Perceptron

Multilayer perceptron (MLP), as one of the most extensively used artificial neural network models [74], has been successfully implemented for solving various real-world issues such as pattern classification [75] and regression analysis [76]. The MLP is composed of multiple perceptrons, with at least one hidden layer in addition to one input layer and one output layer. The information is received as input on one side of the MLP and the output is supplied from the other side via one-way transmission between nodes in different layers. For the MLP, the sample data space is mostly high-dimensional and multimodal, and at the same time the data may be disturbed by noise, data redundancy and data loss. Thus, the main purpose of training the MLP is to update the two crucial sets of parameters that dominate the final output: the weights W and biases θ, which is a very challenging optimization problem [15, 77]. In this section, the Balloon and Breast cancer datasets from the University of California at Irvine (UCI) repository [78] are utilized for examining the applicability of the proposed IGTO algorithm for training MLP. Table 12 presents the specification of these datasets.


To measure the performance of an algorithm in training the MLP, the average mean square error criterion (MSE¯) over all training samples is defined as follows:

$$\overline{MSE}=\frac{1}{q}\sum_{k=1}^{q}\sum_{i=1}^{m}\left(o_i^{k}-d_i^{k}\right)^{2} \tag{24}$$

In Eq. (24), q represents the number of training samples, m is the number of outputs, and dik and oik denote the desired and actual outputs of the i-th output unit when the k-th training sample is used, respectively. The closer the actual output is to the desired one, the smaller the value of MSE¯, which means that the trained model achieves better performance.

Besides the optimization algorithms shown in Table 2, the Tunicate Swarm Algorithm (TSA) [34], Sooty Tern Optimization Algorithm (STOA) [35], and Seagull Optimization Algorithm (SOA) [32] are also taken into account in this experiment. The variables are assumed to be in the range of [−10, 10]. Each optimizer executes independently 10 times, with the maximum iterations and population size set to 250 and 30, respectively. Meanwhile, the parameters of all algorithms are consistent with the original literature. With regard to the structure of the MLP, the number of nodes in the hidden layer is equal to 2n + 1 as recommended in [74], where n denotes the number of attributes in the dataset. Fig. 12 illustrates an example of the process of training the MLP by IGTO.
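To show how a meta-heuristic such as IGTO can train the MLP, the sketch below decodes a flat candidate vector into the weights and biases of a one-hidden-layer network with 2n + 1 hidden nodes and evaluates the MSE¯ of Eq. (24). It is an illustration with random data and a single sigmoid output, not the authors' implementation.

import numpy as np

def mlp_mse(theta, X, y, n_hidden):
    """Decode theta into (W1, b1, W2, b2) for a one-hidden-layer MLP and
    return the MSE of Eq. (24) for the single-output case."""
    n_in = X.shape[1]
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden];                                i += n_hidden
    W2 = theta[i:i + n_hidden].reshape(n_hidden, 1);           i += n_hidden
    b2 = theta[i]
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))          # sigmoid hidden layer
    out = (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()
    return np.mean((out - y) ** 2)

# Toy usage with random data; in the experiments the vector theta would be optimized by IGTO
rng = np.random.default_rng(1)
X, y = rng.random((20, 4)), rng.integers(0, 2, 20)
n_hidden = 2 * X.shape[1] + 1                          # 2n + 1 hidden nodes, as in this section
dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
theta = rng.uniform(-10, 10, dim)
print(mlp_mse(theta, X, y, n_hidden))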


Figure 12: Training MLP by the proposed IGTO

The MSE¯ and classification accuracy attained by each method on the datasets are listed in Table 13. In consideration of the simplicity of the Balloon dataset, all optimizers achieve 100% classification accuracy except WOA and SMA, yet the proposed IGTO provides a better value of MSE¯ than the others. On the Breast cancer dataset, it is obvious that IGTO still obtains the best result, with an MSE¯ of 7.35E−04 and 100% classification accuracy.

All these results demonstrate that the proposed algorithm has a stable and consistent ability to escape from local optima and eventually find the global minimum in the complex search space. Besides, this case also highlights the applicability of the IGTO algorithm: IGTO is capable of finding more suitable crucial parameters for the MLP, thus making it perform better.


7  Conclusion and Future Work

In this paper, a novel improved version of the basic gorilla troops optimizer named IGTO was put forward to solve complex global optimization problems. First, Circle chaotic mapping was introduced to enhance the diversity of the initial gorilla population. Second, the lens opposition-based learning strategy was adopted to expand the search domain, thus avoiding the algorithm falling into local optima. Moreover, the adaptive β-hill climbing algorithm was hybridized with GTO to boost the quality of the final solutions. In order to evaluate the effectiveness of the proposed algorithm, IGTO was compared with the basic GTO and five other state-of-the-art algorithms on 19 classical benchmark functions, including unimodal, multimodal, and fix-dimension multimodal functions. Besides, the non-parametric Wilcoxon rank-sum test and the mean absolute error (MAE) were used to analyze the experimental results. The statistical results demonstrate that the proposed IGTO algorithm provides better local optimum avoidance, solution quality, and robustness than the other competitors. The three improvements significantly boost the performance of IGTO. In order to further test the applicability of IGTO in real-world applications, IGTO was applied to solve four engineering design problems and train multilayer perceptron. The experimental results show that IGTO has strong competitive performance in terms of optimization accuracy.

Nevertheless, as mentioned in the experiment section above, IGTO still has the main limitation of high computation time, which needs to be improved. It is believed that this situation could be mitigated via the introduction of several parallel mechanisms, e.g., master-slave model, cell model and coordination strategy.

In future work, we will aim to further enhance the solution accuracy of IGTO while reducing the total process consumption. Also, we plan to further investigate the impact of the lens opposition-based learning and adaptive β-hill climbing strategies on the performance of other meta-heuristic algorithms. In addition, we hope to apply the proposed technique to solve more practical problems, such as the parameter self-tuning of speed proportional integral differential (PID) controllers for brushless direct current motors, the global path planning of autonomous underwater vehicles in complex environments, and the maximum power point tracking of solar photovoltaic systems.

Acknowledgement: The authors are grateful to the editor and reviewers for their constructive comments and suggestions, which have improved the presentation.

Funding Statement: This work is financially supported by the Fundamental Research Funds for the Central Universities under Grant 2572014BB06.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. Zhang, X., Zhao, K., & Niu, Y. (2020). Improved harris hawks optimization based on adaptive cooperative foraging and dispersed doraging strategies. IEEE Access, 8, 160297-160314. [Google Scholar] [CrossRef]
  2. Birogul, S. (2019). Hybrid harris hawk optimization based on differential evolution (HHODE) algorithm for optimal power flow problem. IEEE Access, 7, 184468-184488. [Google Scholar] [CrossRef]
  3. Hussain, K., Salleh, M. N. M., Cheng, S., & Shi, Y. H. (2019). Metaheuristic research: A comprehensive survey. Artificial Intelligence Review, 52(4), 2191-2233. [Google Scholar] [CrossRef]
  4. Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M., & Gandomi, A. H. (2021). The arithmetic optimization algorithm. Computer Methods in Applied Mechanics and Engineering, 376, 113609. [Google Scholar] [CrossRef]
  5. Liang, J., Xu, W., Yue, C., Yu, K., & Song, H. (2019). Multimodal multiobjective optimization with differential evolution. Swarm and Evolutionary Computation, 44, 1028-1059. [Google Scholar] [CrossRef]
  6. Nadimi-Shahraki, M. H., Taghian, S., & Mirjalili, S. (2021). An improved grey wolf optimizer for solving engineering problems. Expert Systems with Applications, 166, 113917. [Google Scholar] [CrossRef]
  7. Fan, Q., Huang, H., Yang, K., Zhang, S., & Yao, L. (2021). A modified equilibrium optimizer using opposition-based learning and novel update rules. Expert Systems with Applications, 170, 114575. [Google Scholar] [CrossRef]
  8. Boussaid, I., Lepagnot, J., & Siarry, P. (2013). A survey on optimization metaheuristics. Information Sciences, 237, 82-117. [Google Scholar] [CrossRef]
  9. Fan, Q., Chen, Z., Zhang, W., Fang, X. (2020). ESSAWOA: Enhanced whale optimization algorithm integrated with salp swarm algorithm for global optimization. Engineering with Computers, DOI 10.1007/s00366-020-01189-3. [CrossRef]
  10. Dokeroglu, T., Sevinc, E., Kucukyilmaz, T., & Cosar, A. (2019). A survey on new generation metaheuristic algorithms. Computers & Industrial Engineering, 137, 106040. [Google Scholar] [CrossRef]
  11. Slowik, A., & Kwasnicka, H. (2018). Nature inspired methods and their industry applications–-swarm intelligence algorithms. IEEE Transactions on Industrial Informatics, 14(3), 1004-1015. [Google Scholar] [CrossRef]
  12. Abualigah, L., Alsalibi, B., Shehab, M., Alshinwan, M., & Khasawneh, A. M. (2020). A parallel hybrid krill herd algorithm for feature selection. International Journal of Machine Learning and Cybernetics, 12(3), 783-806. [Google Scholar] [CrossRef]
  13. Debnath, S., Baishya, S., Sen, D., & Arif, W. (2020). A hybrid memory-based dragonfly algorithm with differential evolution for engineering application. Engineering with Computers, 37(4), 2775-2802. [Google Scholar] [CrossRef]
  14. Nguyen, T. T., Wang, H. J., Dao, T. K., Pan, J. S., & Liu, J. H. (2020). An improved slime mold algorithm and its application for optimal operation of cascade hydropower stations. IEEE Access, 8, 226754-226772. [Google Scholar] [CrossRef]
  15. Jia, H., Sun, K., Zhang, W., Leng, X. (2021). An enhanced chimp optimization algorithm for continuous optimization domains. Complex & Intelligent Systems, DOI 10.1007/s40747-021-00346-5. [CrossRef]
  16. Dehghani, M., Montazeri, Z., Givi, H., Guerrero, J., & Dhiman, G. (2020). Darts game optimizer: A new optimization technique based on darts game. International Journal of Intelligent Engineering and Systems, 13(5), 286-294.
  17. Hamed, A. Y., Alkinani, M. H., & Hassan, M. R. (2020). A genetic algorithm optimization for multi-objective multicast routing. Intelligent Automation & Soft Computing, 26(6), 1201-1216.
  18. Jiang, A., Guo, X., Zheng, S., & Xu, M. (2021). Parameters identification of tunnel jointed surrounding rock based on Gaussian process regression optimized by difference evolution algorithm. Computer Modeling in Engineering & Sciences, 127(3), 1177-1199.
  19. Simon, D. (2008). Biogeography-based optimization. IEEE Transactions on Evolutionary Computation, 12(6), 702-713.
  20. Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671-680.
  21. Mirjalili, S., Mirjalili, S. M., & Hatamlou, A. (2015). Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Computing and Applications, 27(2), 495-513.
  22. Kaveh, A., & Dadras, A. (2017). A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Advances in Engineering Software, 110, 69-84.
  23. Zhao, W., Wang, L., & Zhang, Z. (2019). Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowledge-Based Systems, 163, 283-304.
  24. Faramarzi, A., Heidarinejad, M., Stephens, B., & Mirjalili, S. (2020). Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems, 191, 105190.
  25. Shi, Y., & Eberhart, R. C. (1999). Empirical study of particle swarm optimization. Proceedings of the 1999 Congress on Evolutionary Computation, pp. 1945–1950. Washington, DC, USA.
  26. Dorigo, M., Birattari, M., & Stutzle, T. (2006). Ant colony optimization. IEEE Computational Intelligence Magazine, 1(4), 28-39.
  27. Meng, X., Liu, Y., Gao, X., & Zhang, H. (2014). A new bio-inspired algorithm: Chicken swarm optimization. International Conference in Swarm Intelligence, pp. 86–94. Cham, Switzerland: Springer. DOI 10.1007/978-3-319-11857-4_10.
  28. Mirjalili, S. (2016). Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Computing and Applications, 27(4), 1053-1073.
  29. Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51-67.
  30. Dhiman, G., & Kumar, V. (2017). Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Advances in Engineering Software, 114, 48-70.
  31. Dhiman, G., & Kumar, V. (2018). Emperor penguin optimizer: A bio-inspired algorithm for engineering problems. Knowledge-Based Systems, 159, 20-50.
  32. Dhiman, G., & Kumar, V. (2019). Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowledge-Based Systems, 165, 169-196.
  33. Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., & Mafarja, M. (2019). Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems, 97, 849-872.
  34. Kaur, S., Awasthi, L. K., Sangal, A. L., & Dhiman, G. (2020). Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Engineering Applications of Artificial Intelligence, 90, 103541.
  35. Dhiman, G., & Kaur, A. (2019). STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Engineering Applications of Artificial Intelligence, 82, 148-174.
  36. Li, S., Chen, H., Wang, M., Heidari, A. A., & Mirjalili, S. (2020). Slime mould algorithm: A new method for stochastic optimization. Future Generation Computer Systems, 111, 300-323.
  37. Dhiman, G., Garg, M., Nagar, A., Kumar, V., & Dehghani, M. (2020). A novel algorithm for global optimization: Rat swarm optimizer. Journal of Ambient Intelligence and Humanized Computing, 12(8), 8457-8482.
  38. Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A. A., & Al-qaness, M. A. A. (2021). Aquila optimizer: A novel meta-heuristic optimization algorithm. Computers & Industrial Engineering, 157, 107250.
  39. Gonçalves, M. S., Lopez, R. H., & Miguel, L. F. F. (2015). Search group algorithm: A new metaheuristic method for the optimization of truss structures. Computers & Structures, 153, 165-184.
  40. Moosavian, N., & Roodsari, B. K. (2014). Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm and Evolutionary Computation, 17, 14-24.
  41. Rao, R. V., & Savsani, V. J. (2011). Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Computer-Aided Design, 43(3), 303-315.
  42. Wu, T., Liu, C. C., & He, C. (2019). Fault diagnosis of bearings based on KJADE and VNWOA-LSSVM algorithm. Mathematical Problems in Engineering, 2019, 1-19.
  43. Ghosh, K. K., Ahmed, S., Singh, P. K., Geem, Z. W., & Sarkar, R. (2020). Improved binary sailfish optimizer based on adaptive β-hill climbing for feature selection. IEEE Access, 8, 83548-83560.
  44. Tang, A., Zhou, H., Han, T., & Xie, L. (2021). A chaos sparrow search algorithm with logarithmic spiral and adaptive step for engineering problems. Computer Modeling in Engineering & Sciences, 129(1), 1-34.
  45. Yan, Z., Zhang, J., & Tang, J. (2021). Path planning for autonomous underwater vehicle based on an enhanced water wave optimization algorithm. Mathematics and Computers in Simulation, 181, 192-241.
  46. El-Fergany, A. A. (2021). Parameters identification of PV model using improved slime mould optimizer and Lambert W-function. Energy Reports, 7, 875-887.
  47. Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.
  48. Jia, H., Lang, C., Oliva, D., Song, W., & Peng, X. (2019). Dynamic Harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sensing, 11(12), 1421.
  49. Ding, H., Wu, Z., & Zhao, L. (2020). Whale optimization algorithm based on nonlinear convergence factor and chaotic inertial weight. Concurrency and Computation: Practice and Experience, 32(24), e5949.
  50. Jia, H., & Lang, C. (2021). Salp swarm algorithm with crossover scheme and Lévy flight for global optimization. Journal of Intelligent & Fuzzy Systems, 40(5), 9277-9288.
  51. Abdel-Basset, M., Chang, V., & Mohamed, R. (2020). HSMA_WOA: A hybrid novel slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Applied Soft Computing, 95, 106642.
  52. Liu, C. A., Lei, Q., & Jia, H. (2020). Hybrid imperialist competitive evolutionary algorithm for solving biobjective portfolio problem. Intelligent Automation & Soft Computing, 26(6), 1477-1492.
  53. Dhiman, G. (2019). ESA: A hybrid bio-inspired metaheuristic optimization approach for engineering problems. Engineering with Computers, 37(1), 323-353.
  54. Abdollahzadeh, B., Soleimanian Gharehchopogh, F., & Mirjalili, S. (2021). Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. International Journal of Intelligent Systems, 36(10), 5887-5958.
  55. Ginidi, A., Ghoneim, S. M., Elsayed, A., El-Sehiemy, R., & Shaheen, A. (2021). Gorilla troops optimizer for electrically based single and double-diode models of solar photovoltaic systems. Sustainability, 13(16), 9459.
  56. Duan, Y., Liu, C., & Li, S. (2021). Battlefield target grouping by a hybridization of an improved whale optimization algorithm and affinity propagation. IEEE Access, 9, 46448-46461.
  57. Kaur, G., & Arora, S. (2018). Chaotic whale optimization algorithm. Journal of Computational Design and Engineering, 5(3), 275-284.
  58. Tizhoosh, H. R. (2005). Opposition-based learning: A new scheme for machine intelligence. International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, pp. 695–701. Vienna, Austria.
  59. Ouyang, C., Zhu, D., Qiu, Y., & Zhang, H. (2021). Lens learning sparrow search algorithm. Mathematical Problems in Engineering, 2021, 1-17.
  60. Long, W., Wu, T., Tang, M., Xu, M., & Cai, S. (2020). Grey wolf optimizer algorithm based on lens imaging learning strategy. Acta Automatica Sinica, 46(10), 2148-2164.
  61. Al-Betar, M. A., Aljarah, I., Awadallah, M. A., Faris, H., & Mirjalili, S. (2019). Adaptive β-hill climbing for optimization. Soft Computing, 23(24), 13489-13512.
  62. Glover, F. (1986). Future paths for integer programming and links to artificial intelligence. Computers & Operations Research, 13(5), 533-549.
  63. Mladenović, N., & Hansen, P. (1997). Variable neighborhood search. Computers & Operations Research, 24(11), 1097-1100.
  64. Long, W., Jiao, J., Liang, X., Wu, T., & Xu, M. (2021). Pinhole-imaging-based learning butterfly optimization algorithm for global optimization and feature selection. Applied Soft Computing, 103, 107146.
  65. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46-61.
  66. Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., & Faris, H. (2017). Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 114, 163-191.
  67. García, S., Fernández, A., Luengo, J., & Herrera, F. (2010). Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences, 180(10), 2044-2064.
  68. Mirjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Systems, 89, 228-249.
  69. Kannan, B., & Kramer, S. N. (1994). An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 116(2), 405-411.
  70. Sandgren, E. (1990). Nonlinear integer and discrete programming in mechanical design optimization. Journal of Mechanical Design, 112(2), 223-229.
  71. Nadimi-Shahraki, M. H., Taghian, S., Mirjalili, S., & Faris, H. (2020). MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Applied Soft Computing, 97, 106761.
  72. Savsani, P., & Savsani, V. (2016). Passing vehicle search (PVS): A novel metaheuristic algorithm. Applied Mathematical Modelling, 40(5), 3951-3978.
  73. Naruei, I., & Keynia, F. (2021). A new optimization method based on COOT bird natural life model. Expert Systems with Applications, 183, 115352.
  74. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Let a biogeography-based optimizer train your multi-layer perceptron. Information Sciences, 269, 188-209.
  75. Melin, P., Sánchez, D., & Castillo, O. (2012). Genetic optimization of modular neural networks with fuzzy response integration for human recognition. Information Sciences, 197, 1-19.
  76. Guo, Z. X., Wong, W. K., & Li, M. (2012). Sparsely connected neural network-based time series forecasting. Information Sciences, 193, 54-71.
  77. Wang, L., Zhang, D., Fan, Y., Xu, H., & Wang, Y. (2021). Multilayer perceptron training based on a Cauchy variant grey wolf optimizer algorithm. Computer Engineering and Science, 43(6), 1131-1140.
  78. Dua, D., & Graff, C. (2019). UCI machine learning repository. Irvine, CA: University of California, School of Information and Computer Science. http://archive.ics.uci.edu/ml
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.