Metaheuristic algorithms, as effective methods for solving optimization problems, have recently attracted considerable attention in science and engineering. They are popular and widely applied owing to their high efficiency and low complexity. These algorithms are generally based on behaviors observed in nature, the physical sciences, or human society. This study proposes a novel metaheuristic algorithm called the dark forest algorithm (DFA), which yields improved results on global optimization problems. In DFA, the population is divided into four groups: highest civilization, advanced civilization, normal civilization, and low civilization. Each civilization has a unique iteration strategy. To verify DFA’s capability, its performance on 35 well-known benchmark functions is compared with that of six other metaheuristic algorithms: the artificial bee colony algorithm, firefly algorithm, grey wolf optimizer, harmony search algorithm, grasshopper optimization algorithm, and whale optimization algorithm. The results show that DFA provides solutions with improved efficiency for low-dimensional problems and outperforms most other algorithms on high-dimensional problems. DFA is then applied to five engineering projects to demonstrate its applicability. The results show that the performance of DFA is competitive with that of current well-known metaheuristic algorithms. Finally, potential upgrade routes for DFA are proposed as possible future developments.
With the continuing development of modern industry, optimization strategies are becoming increasingly essential; therefore, novel optimization algorithms need to be developed. To date, numerous optimization algorithms have been proposed, with nature-inspired algorithms being some of the most successful algorithms. Compared to traditional optimization methods, such as the gradient descent and direct search methods [
Metaheuristic algorithms are often inspired by observations from natural phenomena. For example, the ant colony optimization (ACO) algorithm is inspired by the foraging behavior of ants [
Most metaheuristic algorithms are characterized by randomness, communication, exploration, and exploitation. Randomness gives algorithms the possibility of reaching superior optimization solutions. Communication enables information exchange between individual solutions, allowing them to learn from each other to reach superior optimization solutions. Exploration provides algorithms with a trial of new ideas or strategies, while exploitation allows algorithms to adopt the techniques that have proven successful in the past [
In this study, a new algorithm called dark forest algorithm (DFA) is proposed based on the rule of superiority and inferiority among natural civilizations and the universe’s dark forest law [
The rest of this paper is organized as follows. Section 2 introduces the literature related to the development and applications of metaheuristic algorithms. Section 3 describes the mathematical model of the proposed DFA and the workflow and pseudo-code of the algorithm. Section 4 compares the performance of DFA with six other metaheuristic algorithms on 35 benchmark functions. Section 5 illustrates five engineering design problems solved using DFA with discussions on their outcomes. Finally, Section 6 presents the conclusions and suggestions for future research.
Many intricate and fascinating phenomena can be observed in nature that provide inspiration for solving practical problems. The known metaheuristic algorithms can be classified into five categories according to the sources of their inspiration: evolutionary algorithms, swarm intelligence algorithms, physics-based algorithms, human-based algorithms, and other algorithms (
Evolutionary algorithms, the earliest metaheuristic algorithms proposed, are based on natural evolution and remain among the most commonly used. They draw on phenomena and theories from biological evolution. Typical algorithms in this category include GA, evolutionary programming [
Swarm intelligence algorithms are constructed by simulating group activities observed in nature, such as those of colonies or social animals. Such swarm intelligence emerges from the behaviors of simple individuals interacting as a whole group, exhibiting intelligent features without centralized control. Typical examples include the ACO algorithm, PSO algorithm, artificial bee colony algorithm (ABC) [
Physics-based algorithms are inspired by the physics of matter, and they usually have a solid theoretical basis. Some popular algorithms in this category are SA, central force optimization [
Human-based algorithms are designed based on human activities in the society. For example, teaching-learning-based optimization (TLBO) is inspired by the interaction between a learner’s learning and a teacher’s teaching [
Some algorithms do not fall into any of the above categories but are inspired by other natural phenomena, such as water drops algorithm [
Although many metaheuristic algorithms already exist, new algorithms still need to be designed. According to the No Free Lunch (NFL) theorem [
Civilizations in the universe survive under the dark forest law. According to the dark forest law, a civilization once discovered will inevitably be attacked by other civilizations in the universe. The universe’s evolution is endless and is always accompanied by the extinction of existing civilizations and the birth of new ones. Civilizations plunder each other and cooperate, constantly moving toward a better direction. Civilizations can be classified according to their level of development: highest, advanced, normal, and low civilizations.
Each civilization has its own exploration strategy. The highest civilization cannot learn from others because no known civilization is superior to it, so it usually moves around during exploration. The highest civilization sometimes plunders other civilizations for its development. If the highest civilization does not find a better position, it remains unchanged. Advanced civilizations learn from the highest civilization and plunder the normal civilizations. Normal civilizations learn from the highest and advanced civilizations, choosing which civilizations to learn from based on the strengths and weaknesses of those civilizations and their distances from them. Finally, the low civilizations are subject to elimination, at which point newly created civilizations enter the iterations.
This section describes the mathematical model, algorithmic workflow, and pseudo-code of DFA. The general workflow of the algorithm is as follows: 1) randomly initializing the population coordinates in the solution space; 2) classifying civilizations according to the adaptability of each civilization coordinate; 3) iteratively updating all locations according to the corresponding civilization level; and 4) separately performing a refined search for the highest civilization in the last few iterations.
DFA is a population-based algorithm. Similar to other population-based metaheuristic algorithms, it generates several uniformly distributed initial solutions in the search space at the beginning as follows:
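This uniform initialization step can be sketched as follows (a minimal Python illustration; the function and parameter names are not from the paper):

```python
import numpy as np

def initialize_population(n_pop, dim, lower, upper, seed=None):
    """Generate n_pop solutions distributed uniformly in [lower, upper]^dim."""
    rng = np.random.default_rng(seed)
    return lower + (upper - lower) * rng.random((n_pop, dim))
```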
All solutions are constructed into a matrix
After the initial population is generated, it is sorted according to the fitness value of each civilization. The top-ranked civilization is the highest civilization; civilizations ranked within the top 3/10 of the population, excluding the highest civilization, are advanced civilizations; civilizations ranked between 3/10 and 8/10 are normal civilizations; and the remaining civilizations are low civilizations. Each civilization type updates its coordinates in a different way.
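The ranking-based grouping described above can be expressed as follows (a sketch for a minimization problem; the helper name is hypothetical):

```python
import numpy as np

def classify(fitness):
    """Split population indices into the four civilization levels by fitness rank.

    Proportions follow the description in the text: the best solution is the
    highest civilization, ranks up to 3/10 are advanced, ranks between 3/10 and
    8/10 are normal, and the remaining 2/10 are low civilizations.
    """
    order = np.argsort(fitness)   # ascending: best (minimum) fitness first
    n = len(fitness)
    a, b = int(0.3 * n), int(0.8 * n)
    return {
        "highest": order[:1],
        "advanced": order[1:a],
        "normal": order[a:b],
        "low": order[b:],
    }
```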
Locations of the highest civilizations are updated as follows. The highest civilization has globally optimal fitness, and its primary purpose of movement is to find coordinates with enhanced fitness. During the iteration process, the highest civilization only changes its coordinates when it finds improved fitness. The highest civilization moves randomly in 50% of the cases and takes reference from the advanced civilization for the other 50% of the cases. The update formulas are as follows:
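Since the update equations are not reproduced here, the following is only a behavioral sketch of the greedy, move-if-better rule: a random move in half of the cases, a move referencing a random advanced civilization otherwise. The step forms are assumed placeholders, not the paper's exact formulas.

```python
import numpy as np

def update_highest(best, best_fit, advanced, objective, step, rng):
    """Greedy update for the highest civilization (behavioral sketch)."""
    if rng.random() < 0.5:
        # random exploratory move (step form is an assumption)
        candidate = best + step * rng.standard_normal(best.shape)
    else:
        # move referencing a randomly chosen advanced civilization
        ref = advanced[rng.integers(len(advanced))]
        candidate = best + rng.random() * (ref - best)
    cand_fit = objective(candidate)
    if cand_fit < best_fit:       # keep only improving moves
        return candidate, cand_fit
    return best, best_fit
```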
Locations of the advanced civilizations are updated as follows. Advanced civilizations obtain reference from normal civilizations for 20% of the cases. Since the highest civilization has better fitness than the advanced civilization, advanced civilizations that move to the highest civilizations are more likely to obtain superior results. Hence, the probability for advanced civilizations to move to the highest civilization is set as 80%. Advanced civilizations use spiral location updates in most cases, similar to the spiral update in WOA, but with variations. Mathematically, the update formulas are as follows:
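A WOA-style logarithmic spiral toward the chosen reference would look roughly like this (the spiral constant `b` and the exact variation DFA applies are assumptions):

```python
import numpy as np

def update_advanced(x, best, normal, rng, b=1.0):
    """Spiral move for an advanced civilization (sketch of the described rule).

    With probability 0.8 the civilization spirals toward the highest
    civilization; otherwise it references a random normal civilization.
    """
    if rng.random() < 0.8:
        target = best
    else:
        target = normal[rng.integers(len(normal))]
    l = rng.uniform(-1.0, 1.0)                 # spiral parameter
    dist = np.abs(target - x)                  # element-wise distance to target
    return dist * np.exp(b * l) * np.cos(2 * np.pi * l) + target
```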
Locations of normal civilizations are updated as follows. Normal civilizations also employ a spiral position update. The reference is a civilization coordinate obtained via linear ranking and roulette selection, which may be either the highest civilization or an advanced civilization. In the selection process, the probability of each candidate is jointly determined by its fitness and its distance from the current normal civilization, with a fitness-to-distance weight ratio of 4:1. The update formulas are
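The ranking-and-roulette reference selection with the 4:1 fitness-to-distance weighting can be sketched as follows (the exact scoring DFA uses may differ):

```python
import numpy as np

def select_reference(x, candidates, cand_fitness, rng):
    """Roulette selection of a reference civilization for a normal civilization.

    Each candidate's weight combines its fitness rank and its distance rank
    to x with a 4:1 ratio; the linear-ranking scores here are an assumption.
    """
    n = len(candidates)
    fit_rank = np.argsort(np.argsort(cand_fitness))   # 0 = best fitness
    fit_score = n - fit_rank                          # higher = better
    dist = np.linalg.norm(candidates - x, axis=1)
    dist_rank = np.argsort(np.argsort(dist))          # 0 = closest
    dist_score = n - dist_rank                        # higher = closer
    weights = 4.0 * fit_score + 1.0 * dist_score
    p = weights / weights.sum()
    return candidates[rng.choice(n, p=p)]
```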
Locations of low civilizations are updated as follows. Owing to their poor fitness, the coordinates of the low civilizations are reset in 50% of the cases. This is reasonable because the primary responsibility of a low civilization is to maintain population diversity and thereby prevent the search from falling into a local optimum. In the other 50% of the cases, a new coordinate is obtained by mapping with respect to the centroid of the coordinates of all the highest and advanced civilizations. The update formulas are
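A sketch of the two cases, with the centroid-mapping step an assumed form:

```python
import numpy as np

def update_low(x, elites, lower, upper, rng):
    """Update for a low civilization (behavioral sketch).

    With probability 0.5 the coordinate is re-initialized uniformly (keeping
    diversity); otherwise it is mapped using the centroid of the highest and
    advanced civilizations.
    """
    if rng.random() < 0.5:
        return lower + (upper - lower) * rng.random(x.shape)
    centroid = elites.mean(axis=0)
    return centroid + rng.uniform(-1.0, 1.0) * (centroid - x)
```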
After all civilization locations are updated, one refined search in the last ten iterations is performed for the highest civilizations. The refined search is performed to update each component
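The component-wise refinement can be illustrated as follows (the search radius and trial count here are assumed placeholders, not the paper's settings):

```python
import numpy as np

def refine(best, best_fit, objective, radius, rng, n_trials=10):
    """Component-wise refinement of the best solution (sketch).

    Each coordinate in turn is perturbed within a small radius, and the
    change is kept only if fitness improves.
    """
    best = best.copy()
    for j in range(best.size):
        for _ in range(n_trials):
            trial = best.copy()
            trial[j] += rng.uniform(-radius, radius)
            f = objective(trial)
            if f < best_fit:
                best, best_fit = trial, f
    return best, best_fit
```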
The algorithm ends after a specified number of iterations is reached, with the coordinates recorded by the highest civilization at the end being the optimal solution and the corresponding fitness being the optimal fitness. The pseudo-code of DFA algorithm is presented in
Algorithm 1: Pseudo-code of DFA
1: Initialize the population randomly in the search space
2: while the maximum number of iterations is not reached do
3:     Compute the fitness of each solution
4:     Sort the solutions and assign civilization levels
5:     for each solution do
6:         if it belongs to the highest civilization then
7:             Update the position by the highest-civilization formulas
8:         else if it belongs to an advanced civilization then
9:             Update the position by the advanced-civilization formulas
10:        else if it belongs to a normal civilization then
11:            Update the position by the normal-civilization formulas
12:        else
13:            Update the position by the low-civilization formulas
14:        end if
15:    end for
16:    Check if any search solution goes beyond the given search space and then adjust it
17:    Compute the fitness of each solution
18:    if within the last ten iterations then
19:        Refine search of the best solution component by component
20:    end if
21: end while
22: return the position and fitness of the highest civilization
In this section, the results of DFA on 35 benchmark functions are shown.
F | Function name | D | Range | f_min
---|---|---|---|---
F1 | Goldstein-Price | 2 | [−2, 2] | 3
F2 | Branin | 2 | [−5, 10] × [0, 15] | 0.397887357
F3 | Bohachevsky 1 | 2 | [−50, 50] | 0
F4 | Easom | 2 | [−10, 10] | −1
F5 | Beale | 2 | [−4.5, 4.5] | 0
F6 | Bartels Conn | 2 | [−500, 500] | 1
F7 | Shekel’s Foxholes | 2 | [−65.536, 65.536] | 0.9980038378
F8 | Six-Hump Camel-Back | 2 | [−5, 5] | −1.031628453
F9 | Michalewicz | 2 | [0, π] | −1.801303410
F10 | Schaffer | 2 | [−100, 100] | 0
F11 | Drop Wave | 2 | [−5.12, 5.12] | −1
F12 | Shubert | 2 | [−10, 10] | −186.7309
F13 | Bird | 2 | [−2π, 2π] | −106.7645367
F14 | Sphere | 30 | [−100, 100] | 0
F15 | Schwefel P2.22 | 30 | [−10, 10] | 0
F16 | Rosenbrock | 30 | [−30, 30] | 0
F17 | Quartic (De-Jong) | 30 | [−1.28, 1.28] | 0
F18 | Schwefel P1.2 | 30 | [−100, 100] | 0
F19 | Zakharov | 30 | [−5, 10] | 0
F20 | Elliptic (Ellipsoid) | 30 | [−100, 100] | 0
F21 | Rastrigin | 30 | [−5.12, 5.12] | 0
F22 | Griewank | 30 | [−600, 600] | 0
F23 | Alpine | 30 | [−10, 10] | 0
F24 | Levy and Montalvo 1 | 30 | [−10, 10] | 0
F25 | Levy and Montalvo 2 | 30 | [−5, 5] | 0
F26 | Xin-She Yang 6 | 30 | [−10, 10] | −1
F27 | Salomon | 30 | [−100, 100] | 0
F28 | Sinusoidal | 30 | [0, 180] | −3.5
F29 | Schwefel P2.26 | 30 | [−500, 500] | −418.98288 × D
F30 | Kowalik | 4 | [−5, 5] | 0.0003074861
F31 | Hartmann 3 | 3 | [0, 1] | −3.862782148
F32 | Shekel 7 | 4 | [0, 10] | −10.1527
F33 | Paviani | 10 | [2.001, 9.999] | −45.7784684
F34 | Powell’s Quartic | 4 | [−10, 10] | 0
F35 | Colville | 4 | [−10, 10] | 0
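Several of the tabulated benchmarks have compact textbook definitions; a few representatives in Python, using their standard forms:

```python
import numpy as np

def sphere(x):
    """Sphere: sum of squares; minimum 0 at the origin."""
    return np.sum(x ** 2)

def rosenbrock(x):
    """Rosenbrock: narrow curved valley; minimum 0 at (1, ..., 1)."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rastrigin(x):
    """Rastrigin: highly multimodal; minimum 0 at the origin."""
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def griewank(x):
    """Griewank: multimodal with product term; minimum 0 at the origin."""
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```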
For the abovementioned metaheuristic algorithms, the same initialization process as in DFA is employed. To reduce the effect of randomness on the test results, each algorithm is executed on each benchmark function in 30 independent runs with 500 iterations per run. The average values and standard deviations over these runs are used to evaluate the algorithms’ performance.
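This evaluation protocol can be mirrored by a small harness; the `algorithm(objective, rng)` interface here is hypothetical, standing in for any one run of an optimizer that returns its best fitness:

```python
import numpy as np

def evaluate(algorithm, objective, n_runs=30, seed=0):
    """Run an optimizer n_runs times and report the mean and standard
    deviation of the best fitness values, mirroring the 30-run protocol."""
    rng = np.random.default_rng(seed)
    results = np.array([algorithm(objective, rng) for _ in range(n_runs)])
    return results.mean(), results.std()
```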
Benchmark functions F1–F6 are single-peaked functions since they have only one global optimum. They are mainly used to assess the exploitation capability of metaheuristic algorithms.
DFA | ABC | FA | GWO | HS | GOA | WOA | ||
---|---|---|---|---|---|---|---|---|
F1 | Mean | 3.00 | 3.016132 | 3.000004 | 3.000028 | 8.400001 | 3.00 | 3.00 |
Std. | 1.30E−15 | 0.026696 | 7.94E−06 | 5.48E−05 | 11.16211 | 1.26E−12 | 5.11E−15 | |
F2 | Mean | 0.397887 | 0.398291 | 0.397888 | 0.397889 | 0.397887 | 0.397887 | 0.397887 |
Std. | 4.30E−16 | 5.59E−4 | 1.85E−06 | 5.42E−06 | 7.89E−10 | 2.77E−14 | 1.16E−15 | |
F3 | Mean | 0.00 | 0.231584 | 2.37E−05 | 0.00 | 5.01E−05 | 2.36E−11 | 0.00 |
Std. | 0.00 | 0.2265062 | 5.45E−05 | 0.00 | 7.89E−05 | 2.73E−11 | 0.00 | |
F4 | Mean | −1.00 | −0.99941 | −1.00 | −1.00 | −0.9667 | −1.00 | −1.00
Std. | 0.00 | 1.20E−03 | 9.68E−07 | 1.29E−05 | 2.42E−01 | 2.63E−13 | 3.93E−17 | |
F5 | Mean | 0.00 | 0.00041 | 1.37E−07 | 0.101609 | 0.065531 | 0.127012 | 0.00 |
Std. | 0.00 | 7.54E−04 | 3.05E−07 | 2.77E−01 | 2.38E−01 | 2.97E−01 | 0.00 | |
F6 | Mean | 1.00 | 3.478734 | 4370.4147 | 1.00 | 1.116218 | 1.00 | 1.00 |
Std. | 0.00 | 3.964094 | 6136.6486 | 0.00 | 1.13E−01 | 1.78E−06 | 7.85E−17 |
Benchmark functions F7–F13 are multi-peaked functions with many local optima. Multimodal functions effectively test the exploration ability of metaheuristic algorithms: poorly performing algorithms are easily trapped in the local optima of these functions and thus fail to reach global optimization results.
DFA | ABC | FA | GWO | HS | GOA | WOA | ||
---|---|---|---|---|---|---|---|---|
F7 | Mean | 0.998004 | 0.999009 | 0.998004 | 2.967961 | 0.998004 | 1.031138 | 1.527099 |
Std. | 2.78E−17 | 2.57E−03 | 7.14E−07 | 2.918416 | 3.90E−11 | 2.41E−01 | 9.55E−01 | |
F8 | Mean | −1.031628 | −1.030659 | −1.031628 | −1.031628 | −1.031628 | −1.031628 | −1.031628 |
Std. | 5.81E−16 | 1.56E−03 | 8.94E−07 | 1.87E−08 | 1.44E−09 | 2.36E−13 | 4.68E−16 |
F9 | Mean | −1.801303 | −1.800267 | −1.801303 | −1.801301 | −1.801303 | −1.755019 | −1.715869 |
Std. | 8.90E−16 | 1.47E−03 | 2.46E−06 | 6.03E−06 | 2.00E−10 | 2.15E−01 | 2.48E−01 | |
F10 | Mean | 0.002456 | 0.003090 | 0.002456 | 0.002456 | 0.014192 | 0.002456 | 0.009323 |
Std. | 0.00 | 4.60E−03 | 2.27E−08 | 2.69E−08 | 1.93E−02 | 4.66E−14 | 1.44E−02 | |
F11 | Mean | −0.997875 | −0.971810 | −0.999995 | −0.980874 | −0.905380 | −1.00 | −0.970248 |
Std. | 1.54E−02 | 2.80E−02 | 1.55E−05 | 2.96E−02 | 7.10E−02 | 1.35E−12 | 3.18E−02 | |
F12 | Mean | −186.7309 | −186.6006 | −186.7124 | −186.7198 | −186.7309 | −186.7309 | −186.7309 |
Std. | 2.13E−14 | 2.63E−01 | 4.44E−02 | 4.32E−02 | 1.48E−05 | 5.71E−10 | 3.26E−14 | |
F13 | Mean | −106.7645 | −106.7416 | −106.7645 | −106.7645 | −106.7645 | −104.1707 | −106.1161 |
Std. | 4.63E−14 | 2.99E−02 | 2.33E−04 | 1.73E−04 | 1.70E−07 | 7.06E+00 | 4.71E+00 |
Compared to F1–F6, benchmark functions F14–F20 have increased dimensionality from 2 to 30 dimensions; thus, their exploitation difficulty is dramatically higher.
DFA | ABC | FA | GWO | HS | GOA | WOA | ||
---|---|---|---|---|---|---|---|---|
F14 | Mean | 0.092286 | 473.31073 | 45712.240 | 3.06E−84 | 0.541061 | 6204.1981 | 3.01E−98 |
Std. | 5.96E−02 | 6.12E+02 | 1.07E+04 | 1.69E−83 | 3.82E−01 | 3.59E+03 | 1.55E−97 | |
F15 | Mean | 0.461559 | 2.681364 | 1.69E+12 | 1.54E−54 | 0.197023 | 69.192744 | 5.36E−61 |
Std. | 3.67E−01 | 1.97E+00 | 8.32E+12 | 4.56E−54 | 1.07E−01 | 3.66E+01 | 3.63E−60 | |
F16 | Mean | 289.0261 | 613.0397 | 2.40E+08 | 28.63934 | 333.6808 | 2665196.3 | 27.61793 |
Std. | 2.51E+02 | 8.65E+02 | 4.15E+07 | 1.55E−01 | 6.60E+02 | 2.39E+06 | 5.96E−01 | |
F17 | Mean | 0.098006 | 0.773343 | 113.1723 | 0.000606 | 0.113039 | 3.053331 | 0.002122 |
Std. | 4.52E−02 | 7.81E−01 | 1.65E+01 | 5.14E−04 | 4.86E−02 | 1.94E+00 | 2.49E−03 |
F18 | Mean | 2471.722 | 5467.711 | 77849.98 | 8727.620 | 3786.828 | 9466.5033 | 33888.18 |
Std. | 8.14E+02 | 2.17E+03 | 2.33E+04 | 3.27E+03 | 1.16E+03 | 5.05E+03 | 8.87E+03 | |
F19 | Mean | 48.6756 | 266.4841 | 963.9342 | 133.7175 | 57.1907 | 258.12907 | 489.6096 |
Std. | 2.05E+01 | 4.34E+01 | 6.46E+01 | 6.75E+01 | 2.12E+01 | 1.02E+02 | 1.07E+02 | |
F20 | Mean | 0.023375 | 12.942495 | 984.33553 | 1.86E−87 | 0.002295 | 32.495835 | 5.38E−99 |
Std. | 2.56E−02 | 2.08E+01 | 3.88E+02 | 9.79E−87 | 2.08E−03 | 2.40E+01 | 3.74E−98 |
With increasing dimensionality, the number of local optima in multimodal functions increases exponentially. The optimization results of F21–F29 in
DFA | ABC | FA | GWO | HS | GOA | WOA | ||
---|---|---|---|---|---|---|---|---|
F21 | Mean | 0.000411 | 1.550200 | 167.22363 | 0.00 | 0.005283 | 200.2107 | 8.29E−16 |
Std. | 2.82E−04 | 3.52E+00 | 2.12E+01 | 0.00 | 3.44E−03 | 6.42E+01 | 1.93E−15 | |
F22 | Mean | 0.317133 | 175.69147 | 594.55982 | 0.000338 | 1.059683 | 62.931834 | 0.002985 |
Std. | 1.06E−01 | 2.28E+01 | 7.17E+01 | 2.46E−03 | 2.43E−02 | 3.06E+01 | 6.64E−03 | |
F23 | Mean | 0.360416 | 0.920308 | 62.836655 | 2.20E−54 | 0.015389 | 24.326460 | 15.101463 |
Std. | 3.53E−01 | 5.43E−01 | 5.93E+00 | 1.03E−53 | 1.18E−02 | 6.25E+00 | 1.32E+01 | |
F24 | Mean | 0.020931 | 0.064906 | 22.508559 | 0.039639 | 6.34E−05 | 4.132649 | 1.129130 |
Std. | 5.80E−02 | 1.25E−01 | 3.83E+00 | 1.81E−02 | 7.11E−05 | 2.14E+00 | 1.85E+00 | |
F25 | Mean | 0.011110 | 0.181606 | 4.894935 | 0.232352 | 0.001197 | 2.444549 | 0.369730 |
Std. | 1.05E−02 | 2.99E−01 | 9.85E−01 | 1.32E−01 | 2.41E−03 | 1.25E+00 | 4.82E−01 | |
F26 | Mean | 5.17E−16 | 1.76E−13 | 7.89E−08 | −0.066666 | 6.01E−16 | 2.43E−11 | 5.35E−13 |
Std. | 4.65E−16 | 9.66E−14 | 9.06E−08 | 2.92E−01 | 5.62E−16 | 3.10E−11 | 5.25E−13 | |
F27 | Mean | 2.009873 | 13.113274 | 22.730518 | 0.093215 | 1.780434 | 9.256540 | 0.239882 |
Std. | 3.14E−01 | 1.82E+00 | 3.04E+00 | 2.91E−02 | 2.99E−01 | 1.89E+00 | 1.39E−01 | |
F28 | Mean | −3.498860 | −1.060149 | −0.012483 | −1.272104 | −2.083275 | −0.090718 | −3.149778 |
Std. | 7.94E−04 | 6.29E−01 | 6.58E−02 | 4.85E−01 | 1.24E+00 | 1.58E−01 | 5.92E−01 | |
F29 | Mean | −12077.65 | −7918.84 | −2253.11 | −8623.52 | −12544.84 | −6294.25 | −8926.81 |
Std. | 4.45E+02 | 6.00E+02 | 5.28E+02 | 1.01E+03 | 3.47E+01 | 4.29E+02 | 1.01E+03 |
Fixed-dimensional functions usually comprise several low-dimensional terms, which greatly test both the exploitation and exploration abilities of metaheuristic algorithms. Algorithms that balance exploitation and exploration are more likely to obtain superior optimization results.
DFA | ABC | FA | GWO | HS | GOA | WOA | ||
---|---|---|---|---|---|---|---|---|
F30 | Mean | 0.000426 | 0.001370 | 0.001605 | 0.000839 | 0.010827 | 0.005892 | 0.000558 |
Std. | 2.95E−04 | 5.74E−04 | 5.32E−04 | 2.06E−03 | 1.79E−02 | 8.89E−03 | 6.12E−04 | |
F31 | Mean | −3.862782 | −3.861855 | −3.862654 | −3.862318 | −3.862651 | −3.862782 | −3.862782 |
Std. | 2.49E−15 | 1.00E−03 | 3.42E−04 | 2.24E−03 | 9.51E−04 | 2.29E−13 | 1.90E−15 | |
F32 | Mean | −8.312070 | −7.740637 | −6.395885 | −9.572651 | −5.361369 | −5.553447 | −6.807390 |
Std. | 3.10E+00 | 2.30E+00 | 1.58E+00 | 1.89E+00 | 3.27E+00 | 3.38E+00 | 3.52E+00 | |
F33 | Mean | −45.77847 | −45.05114 | −19.19658 | −40.97316 | −45.77847 | −42.02 | −45.77846 |
Std. | 2.20E−10 | 1.22E+00 | 5.66E+00 | 3.06E+00 | 2.45E−08 | 4.14E+00 | 5.82E−05 | |
F34 | Mean | 0.001695 | 0.002405 | 0.037276 | 0.001066 | 0.003754 | 1.32E−05 | 0.001757 |
Std. | 6.01E−03 | 2.76E−03 | 6.70E−02 | 3.26E−03 | 6.64E−03 | 2.60E−05 | 3.55E−03 | |
F35 | Mean | 5.206178 | 23.54695 | 16.389712 | 10.087391 | 108.71767 | 0.780104 | 23.194003 |
Std. | 2.00E+01 | 3.84E+01 | 2.19E+01 | 3.43E+01 | 1.76E+02 | 2.17E+02 | 1.21E+02 |
Overall, DFA shows the best performance among the compared algorithms on the low-dimensional single-peaked, low-dimensional multi-peaked, and fixed-dimensional functions, and it also achieves better results than most algorithms on the high-dimensional functions.
Although the optimization results show that DFA exhibits better overall performance than the other metaheuristic algorithms, its superiority should also be demonstrated through statistical analysis. This section uses the classic statistical analysis method, i.e., the rank-sum ratio (RSR) [
Algorithm | RSR ranking | Probit | RSR fitted values | Grading level |
---|---|---|---|---|
DFA | 1 | 6.802743090739191 | 0.9912724183765839 | 6 |
GWO | 2 | 6.067570523878141 | 0.8917274765228165 | 5 |
WOA | 3 | 5.565948821932863 | 0.8238061402222359 | 5 |
ABC | 4 | 5.1800123697927045 | 0.7715489921466842 | 4 |
HS | 5 | 4.819987630207295 | 0.722800380754713 | 4 |
GOA | 6 | 4.434051178067137 | 0.6705432326791614 | 3
FA | 7 | 3.9324294761218583 | 0.6026218963785807 | 3
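The RSR scores underlying such a table can be computed from a rank matrix; a minimal sketch (the ranking convention, best score receiving the largest rank, is an assumption consistent with RSR values lying in (0, 1]):

```python
import numpy as np

def rank_sum_ratio(scores):
    """Rank-sum ratio for an (n_algorithms, n_benchmarks) score matrix,
    where lower scores are better.

    Each column is ranked so the best algorithm gets rank n, and RSR_i is
    the rank sum of algorithm i divided by n * m.
    """
    n, m = scores.shape
    ranks = n - np.argsort(np.argsort(scores, axis=0), axis=0)
    return ranks.sum(axis=1) / (n * m)
```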
In this section, DFA is applied to five constrained engineering design problems: welded beam design, pressure vessel design, three-bar truss design, compression spring design, and cantilever beam design. All the algorithms are tested for each engineering project in 30 independent runs with 500 iterations per run and the best value is taken as the final optimization result.
Welded beam design is a common engineering optimization problem where the objective is to find the optimum length of the clamped bar
Subject to
Variable range
The constraints and their associated constants are expressed as follows:
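Assuming the standard welded beam formulation from the literature, the objective (fabrication cost) is f = 1.10471·h²·l + 0.04811·t·b·(14 + l); the constraint functions are not reproduced here:

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the welded beam (standard formulation):
    weld cost plus bar material cost."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```

Evaluating this at DFA's solution from the result table reproduces the reported cost of about 1.726.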
The results of DFA are compared with those of the other algorithms, and the data in
Algorithm | Optimum variables | Optimum cost | |||
---|---|---|---|---|---|
h | l | t | b | ||
DFA | 0.2053263 | 3.47659665 | 9.04334824 | 0.20569611 | 1.72595568 |
FA | 0.26841149 | 3.15279017 | 7.65984043 | 0.29365304 | 2.10712421 |
ABC | 0.19403412 | 3.66950515 | 9.22029126 | 0.2075474 | 1.77937345 |
GWO | 0.19870343 | 3.64395848 | 9.00965781 | 0.20696315 | 1.74176404 |
HS | 0.25534331 | 3.12713781 | 7.63445357 | 0.28824088 | 2.03847263 |
GOA | 0.18912920 | 3.85154748 | 9.03662298 | 0.20572969 | 1.74982934 |
WOA | 0.18719115 | 7.44065593 | 8.89750281 | 0.2122135 | 2.23569138 |
DFA is employed for the pressure vessel design problem. This design aims to find the appropriate shell
Subject to
Variable range
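Assuming the standard pressure vessel cost function, with the two thickness variables expressed as multiples of 0.0625 in (an interpretation consistent with the values reported for this problem), the objective is:

```python
def pressure_vessel_cost(x1, x2, R, L):
    """Total cost of the pressure vessel (standard formulation).

    x1, x2: shell and head thicknesses as multiples of 0.0625 in (assumed);
    R, L: inner radius and cylinder length.
    """
    Ts, Th = 0.0625 * x1, 0.0625 * x2
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L
            + 19.84 * Ts ** 2 * R)
```

Evaluating this at DFA's solution from the result table reproduces the reported cost of about 5911.15.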
Algorithm | Optimum variables | Optimum cost | |||
---|---|---|---|---|---|
Ts / 0.0625 | Th / 0.0625 | R | L | |
DFA | 12.68830574 | 6.27191504 | 41.0895908 | 189.55214995 | 5911.15146639 |
ABC | 12.70506601 | 6.27994045 | 41.13437792 | 188.9686432 | 5914.386945161 |
FA | 15.06349385 | 7.4636159 | 48.13021892 | 114.0638514 | 7810.28453922 |
GWO | 13.32095422 | 6.82361653 | 43.10752296 | 164.5239233 | 6038.11905684 |
HS | 13.19592313 | 6.52452479 | 42.72899153 | 169.00003781 | 5971.21817933 |
GOA | 12.88471723 | 6.36890596 | 41.72542605 | 181.31441634 | 5933.29373914 |
WOA | 13.4869777 | 6.66662006 | 43.67544592 | 158.02818082 | 6005.51170785 |
The three-bar truss design is the third case study employed. It aims to evaluate the optimum cross-sectional areas
Subject to
Variable range
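Assuming the standard three-bar truss objective with member length l = 100 cm, the structure volume is V = (2√2·A1 + A2)·l:

```python
import math

def three_bar_truss_volume(A1, A2, length=100.0):
    """Structure volume of the three-bar truss (standard formulation)."""
    return (2.0 * math.sqrt(2.0) * A1 + A2) * length
```

Evaluating this at DFA's solution from the result table reproduces the reported volume of about 263.896.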
The statistical results of DFA and the other algorithms for the three-bar truss design problem are shown in
Algorithm | Optimum variables | Optimum volume | |
---|---|---|---|
DFA | 0.78875500 | 0.40802178 | 263.89578211 |
ABC | 0.78873793 | 0.40811035 | 263.89980965 |
FA | 0.78868058 | 0.40823232 | 263.89578701 |
GWO | 0.78856879 | 0.40855575 | 263.89651005 |
HS | 0.78764908 | 0.41115752 | 263.89655385 |
GOA | 0.78881723 | 0.40784588 | 263.89579234 |
WOA | 0.7883832 | 0.40907398 | 263.89584001 |
Compression spring design aims to minimize the mass f(x) under certain constraints, including four inequality constraints on minimum deflection, shear stress, surge frequency, and outside diameter. Three design variables are present: the mean coil diameter (D), wire diameter (d), and number of active coils (N).
Subject to
Variable range
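Assuming the standard spring formulation, the objective (spring weight) is f = (N + 2)·D·d²:

```python
def spring_mass(d, D, N):
    """Weight of the tension/compression spring (standard formulation)."""
    return (N + 2.0) * D * d ** 2
```

Evaluating this at DFA's solution from the result table reproduces the reported mass of about 0.012665.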
The statistical results of DFA and the other algorithms for the compression spring design problem are shown in
Algorithm | Optimum variables | Optimum mass | ||
---|---|---|---|---|
d | D | N | |
DFA | 0.05180635 | 0.35954649 | 11.12500622 | 0.012665467 |
ABC | 0.05227057 | 0.35384607 | 12.16672679 | 0.013696147 |
FA | 0.05 | 0.31667258 | 14.24697868 | 0.01286243 |
GWO | 0.05298374 | 0.38866847 | 9.65846781 | 0.01272055 |
HS | 0.05581944 | 0.46445987 | 6.95555321 | 0.0129601920 |
GOA | 0.05362217 | 0.40503082 | 8.93195867 | 0.012731378 |
WOA | 0.05126438 | 0.34658734 | 11.90853585 | 0.012668523 |
Cantilever beam design is a structural engineering design problem related to the weight optimization of the square section cantilever. The cantilever beam is rigidly supported at one end, and a vertical force acts on the free node of the cantilever. The beam comprises five hollow square blocks of constant thickness, the height (or width) of which is the decision variable and the thickness is fixed.
Subject to
Variable range
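Assuming the standard cantilever beam objective, the weight is f = 0.0624·(x1 + x2 + x3 + x4 + x5), where the xi are the block heights:

```python
def cantilever_weight(x1, x2, x3, x4, x5):
    """Weight of the five-block cantilever beam (standard formulation)."""
    return 0.0624 * (x1 + x2 + x3 + x4 + x5)
```

Evaluating this at DFA's solution from the result table reproduces the reported weight of about 1.3401.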
The statistical results of DFA and the other algorithms for the cantilever beam design problem are shown in
Algorithm | Optimum variables | Optimum weight | ||||
---|---|---|---|---|---|---
x1 | x2 | x3 | x4 | x5 | |
DFA | 5.96871891 | 5.35265306 | 4.53393307 | 3.475844 | 2.14508805 | 1.34011719 |
ABC | 6.34433249 | 5.25405738 | 4.83394417 | 3.12836834 | 2.0932221 | 1.35120489 |
FA | 5.98319073 | 5.38116037 | 4.44248883 | 3.44473814 | 2.26317699 | 1.34302424 |
GWO | 6.18084273 | 5.11890478 | 4.66205867 | 3.36104363 | 2.19997452 | 1.34302424 |
HS | 6.11954396 | 5.39091533 | 4.42759653 | 3.46660407 | 2.08273114 | 1.34081320 |
GOA | 6.01598408 | 5.30917278 | 4.4945209 | 3.50137541 | 2.15260648 | 1.33995636 |
WOA | 9.53170142 | 4.20385935 | 3.62127756 | 12.87413556 | 3.28738588 | 2.091545650 |
This study presented a novel metaheuristic algorithm called DFA. The effectiveness of DFA was validated on 35 well-known benchmark functions and compared with that of six well-known metaheuristic algorithms. The optimization capabilities of DFA were examined in terms of exploitation capability, statistical analysis, and convergence analysis. The results indicate that DFA is a competitive metaheuristic algorithm with outstanding performance on global optimization problems.
DFA was also applied to five engineering design problems for verification in practical applications (i.e., welded beam, pressure vessel, three-bar truss, compression spring, and cantilever beam design problems). DFA outperforms the other six metaheuristic algorithms in the chosen evaluation criteria. In subsequent studies, DFA will be used to improve the efficiency of machine learning potential development for Fe–Cr–Al ternary alloys.
This work was performed in collaboration with the College of Materials Science and Chemical Engineering, Harbin Engineering University, with the support of
The authors declare that they have no conflicts of interest to report regarding the present study.