
Large-Scale Multi-Objective Optimization Algorithm Based on Weighted Overlapping Grouping of Decision Variables

Liang Chen1, Jingbo Zhang1, Linjie Wu1, Xingjuan Cai1,2,*, Yubin Xu1

1 Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, Taiyuan University of Science and Technology, Taiyuan, 030024, China
2 School of State Key Laboratory of Novel Software Technology, Nanjing University, Nanjing, 210008, China

* Corresponding Author: Xingjuan Cai. Email: email

Computer Modeling in Engineering & Sciences 2024, 140(1), 363-383. https://doi.org/10.32604/cmes.2024.049044

Abstract

The large-scale multi-objective optimization algorithm (LSMOA), based on the grouping of decision variables, is an advanced method for handling high-dimensional decision variables. However, in practical problems the interactions among decision variables are intricate, leading to large group sizes and suboptimal optimization effects; hence a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among different decision variables. If the size of a group surpasses the set threshold, that group undergoes weighted overlapping grouping. Specifically, the interaction strength is evaluated based on the interaction frequency and the number of objectives involved among various decision variables. The decision variable with the highest interaction in the group is identified and temporarily set aside, the remaining variables are reclassified into subgroups, and finally this most-interactive variable is added back to each subgroup. MOEAWOD minimizes the interaction between different groups and maximizes the interaction of decision variables within groups, which guides the direction of convergence and diversity exploration within each group. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of our method; compared with the other algorithms, it remains at an advantage.

1  Introduction

Large-Scale Multi-Objective Optimization Problems (LSMOPs), applied in territories like engineering design and logistics [1–4], share the conflicting objectives characteristic of Multi-Objective Optimization Problems (MOPs) [5–7]. However, their high-dimensional decision variables necessitate specialized strategies for effective resolution. In addressing MOPs, the optimization of one objective frequently results in the degradation of others. Multi-Objective Evolutionary Algorithms (MOEAs) address this by finding a group of non-dominated solutions, effectively balancing these objectives to meet various requirements [8,9]. A minimization problem involving m objectives is symbolically represented as F(X) = min(f1(X), f2(X), …, fm(X)). If X1 is no worse than X2 on every objective and strictly better on at least one, then X1 is said to dominate X2 [10]. If neither X1 nor X2 is superior to the other on all objectives, then X1 and X2 are considered non-dominated solutions [11–13]. This implies that each solution has its unique advantages, necessitating the retention of all such solutions [14–16]. Therefore, the fundamental goal of MOPs is to find a set of mutually non-dominated Pareto solutions. MOEAs like the nondominated sorting genetic algorithm III (NSGA-III) [17] have shown effectiveness on these problems [18–21]. However, the decision variables in engineering design, genetic biology, and other fields now often reach thousands or even hundreds of thousands of dimensions [22–24]. These problems are characterized by high nonlinearity, complexity, and uncertainty, which make traditional optimization algorithms inefficient, degrade their performance, and present a great challenge [11,25,26].
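The dominance relation described above can be written as a short predicate. The following sketch (the function name is ours, not from the paper) checks whether one objective vector Pareto-dominates another under minimization:

```python
def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse on every objective and strictly better on at least one."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))
```

If neither `dominates(f1, f2)` nor `dominates(f2, f1)` holds, the two solutions are mutually non-dominated and both would be retained.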

MOPs that encompass more than 100 dimensions of decision variables are referred to as LSMOPs [27]. Algorithms addressing LSMOPs must consider how the substantial dimensionality of decision variables impacts problem-solving [28]. This is crucial because the decision space expands exponentially with dimensionality, rendering existing MOEAs less effective at solving LSMOPs [29]. Traditional MOEAs are generally committed to an excellent solution selection strategy, i.e., selecting the better and more appropriate solution when multiple individuals are at the same Pareto level during the evolutionary iteration process. However, conventional MOEAs fail to construct superior individuals in LSMOPs, and the huge decision space cannot be explored through basic operations such as crossover and mutation, so the algorithms often settle on suboptimal solutions within a limited local space [30–32]. In environmental selection among poor-quality suboptimal solutions, the chosen individuals usually provide no significant aid in problem-solving [18]. Consequently, this situation compels researchers to develop exceptional individuals capable of guiding the evolution of the entire population and of exploring the expansive decision space. Currently, LSMOAs are typically categorized into three broad groups [1].

The first category encompasses MOEAs that are based on the reduction of decision space, primarily concentrating on reducing the dimensionality of decision variables. This approach transforms LSMOPs into more manageable, non-large-scale optimization tasks [33]. It is crucial that offspring generated from reduced-dimension parent variables can be mapped back to the initial decision space for function evaluation, facilitating optimization without navigating the high-dimensional space. However, this approach often leads to local optima rather than global optima [34]. Common strategies here involve problem transformation and dimensionality reduction.

The second category involves MOEAs with novel search strategies, distinct from decision space partitioning and dimensionality reduction [27]. Recent MOEAs address LSMOPs directly through innovative search strategies that enhance traditional MOEA operators, including crossover and mutation [35]. These approaches focus on generating superior offspring without complex manipulation of the vast decision space [36–38]. Typical strategies include new crossover operators and innovative probabilistic models. However, while typically effective for specific problems, these algorithms often lack generalizability.

The third category comprises MOEAs that focus on decision variable grouping, employing a classic divide-and-conquer strategy to tackle LSMOPs [39,40]. Given the abundance of decision variables in LSMOPs, specific strategies for grouping and alternately optimizing each group are widely applied. Common strategies encompass random grouping, differential grouping, and decision variable analysis.

Currently, most grouping-based evolutionary algorithms use random grouping, which divides the decision variables into several groups of equal size [41]. This random partitioning strategy increases the probability of simultaneously optimizing interacting variables and makes the population converge faster [42]. However, it completely ignores the interactions between decision variables and easily leads to local optima. The differential grouping strategy analyzes the relationships between decision variables and achieves better convergence, but fails to account for population diversity at the same time [43]. Hence, a decision variable analysis (DVA) approach is used by MOEA/DVA to categorize decision variables into distance, position, and mixed variables [39]. A more precise grouping method is employed to differentiate diversity variables from convergence variables, while DVA is utilized to address the insufficient population diversity of the differential grouping method [44].

However, the size of the subgroups remains large due to the intricacy of the decision variables in LSMOPs. For example, an LSMOP with 1000 decision variables may be grouped such that more than 100 decision variables remain in a single group, in which case the subproblem is still an LSMOP. Reference [40] proposed the concept of overlapping grouping, which divides an oversized group into multiple subgroups that may contain the same decision variables, so that group size is reduced while decision variables with direct interactions can still be optimized simultaneously. During overlapping grouping, the decision variables with the highest number of direct interactions are chosen for division, but this approach overlooks the intensity of those interactions. Moreover, the detection of interactions is randomized and may deviate considerably from the real relationships between decision variables. To address these problems, this paper proposes MOEAWOD with the following contributions:

•   A weighted overlapping grouping strategy enhances the effectiveness of decision variable grouping. This strategy subdivides larger groups after interaction analysis on convergent variables, using interaction frequency and the number of objectives involved as subdivision weights. This strengthens intra-group interactions and weakens inter-group ones, showing better effectiveness in experiments.

•   For optimizing post-grouped decision variables, directional information is used to guide non-dominated solutions toward more promising directions. Diversity is explored when directed by two non-dominated individuals, and convergence is expanded when directed by both non-dominated and dominated individuals.

•   The experimental findings reveal that the convergence and diversity of the proposed MOEAWOD algorithm surpass those of other algorithms.

This paper is structured as follows: The related work is shown in Section 2. Section 3 elaborates on the specifically proposed MOEAWOD. Section 4 discusses the experimental process, results, and analysis. Finally, Section 5 concludes the paper and outlines prospects for future research.

2  Related Work

One straightforward method to address LSMOPs involves employing the divide-and-conquer concept. This approach entails grouping decision variables and optimizing them alternately. LSMOAs based on decision variable grouping are typically classified into three categories: random grouping, differential grouping, and decision variable analysis.

The random grouping method enhances the probability of simultaneously optimizing interacting variables, resulting in expedited population convergence [41]. Antonio et al. [42] introduced a MOEA named Cooperative Coevolutionary Generalized Differential Evolution 3 (CCGDE3) to tackle LSMOPs, incorporating randomization within a cooperative coevolutionary framework. It builds on Generalized Differential Evolution 3 (GDE3), randomly partitioning decision variables into subgroups of equal size to increase the probability of optimizing interacting variables concurrently. However, its performance is susceptible to the predetermined population number and grouping mode, potentially leading to a decline in algorithmic effectiveness. Song et al. [45] introduced a stochastic coevolutionary dynamic grouping strategy, yielding favorable test results. Still, relying solely on randomness to enhance the probability of simultaneously optimizing interacting variables may drive the population to local optima, and the group size can also influence the evolutionary effectiveness of the population.
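The random equal-size grouping used by CCGDE3-style algorithms can be sketched as follows (a generic illustration, not the CCGDE3 implementation): shuffle the variable indices, then deal them round-robin so group sizes differ by at most one.

```python
import random

def random_grouping(n_vars, n_groups, seed=None):
    """Randomly partition variable indices 0..n_vars-1 into n_groups
    nearly equal-size groups (random grouping strategy sketch)."""
    rng = random.Random(seed)
    idx = list(range(n_vars))
    rng.shuffle(idx)
    # deal shuffled indices round-robin so group sizes differ by at most one
    return [idx[g::n_groups] for g in range(n_groups)]
```

Because the partition ignores variable interactions, two interacting variables land in the same group only by chance, which is exactly the weakness the differential grouping and DVA strategies below try to fix.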

To avoid this situation, Cao et al. [43] incorporated a differential grouping strategy to identify interactions among decision variables. This strategy resolves the interaction problem by simultaneously optimizing interrelated decision variables, thus fostering a closer alignment between theoretical and practical optimization. Nevertheless, the performance improvement of the differential grouping strategy comes at the expense of increased computational complexity, as it requires analyzing the interaction between decision variables in pairs. To mitigate computational demands, Li et al. [46] implemented a fast interdependent identification method that selectively ignored certain unnecessary checks between decision variables. While the differential grouping technique enhances the convergence of the population, it overlooks population diversity in LSMOPs. This oversight may result in the evolution of the population towards a solution set characterized by overly singular traits.

To circumvent the issue of poor diversity, Ma et al. [39] proposed MOEA/DVA, which uses a DVA grouping method to treat convergence variables and diversity variables separately. Within this approach, variables are categorized into position variables, distance variables, and mixed variables. Following an interaction analysis, decision variables that notably influence convergence are grouped for joint optimization. Subsequently, variables influencing diversity are optimized independently, yielding improved results. Although MOEA/DVA is very effective in dealing with LSMOPs, it is computationally costly because it must evaluate the interactions of a wide range of decision variables. Moreover, owing to the large-scale nature of the problems, the interactions between decision variables are intricate, and the groups obtained after DVA processing may still be too large, leaving each subproblem an LSMOP.

To address the issue of declining evolutionary performance in large groups, we propose a MOEAWOD based on DVA to conduct weighted overlapping grouping for such instances. Following the interaction analysis, the most interactive decision variables are allocated to different groups as overlapping parts. This enables the simultaneous optimization of interacting decision variables, with the added advantage that the size of the group does not influence the evolutionary performance of the population.

3  Proposed MOEAWOD

3.1 Motivation

Random grouping of decision variables fails to account for their interactive relationships, depending solely on randomness to enhance the probability of jointly optimizing interacting variables [41]. This oversight might drive the population towards local optima within the decision space [43]. The interaction between decision variables is characterized by how variables xi and xj jointly influence the behavior of the objective value, such as its monotonicity or convexity [47,48]. In other words, xi alone cannot determine the directional change of the objective function f(x); the value of xj must also be considered. Formally, an interaction between variables xi and xj implies that there exists at least one set of values a1, a2, b1, b2 that meets the following conditions [46]:

f(X)|xi=a2, xj=b1 < f(X)|xi=a1, xj=b1
f(X)|xi=a1, xj=b2 < f(X)|xi=a2, xj=b2    (1)

This illustrates that optimizing decision variables i and j together in one group can more accurately and efficiently optimize the original objective function.
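The condition of Eq. (1) can be checked numerically. The sketch below (names and test functions are ours) evaluates the change in f caused by moving xi from a1 to a2 under two settings of xj; a sign reversal signals an interaction. A separable function like x0² + x1² never triggers it, while a coupled one like x0·x1 does:

```python
def interacts(f, x, i, j, a1, a2, b1, b2):
    """Eq. (1)-style test: xi and xj interact if the effect of moving xi
    from a1 to a2 reverses sign depending on whether xj is b1 or b2."""
    def feval(xi, xj):
        y = list(x)
        y[i], y[j] = xi, xj
        return f(y)
    d1 = feval(a2, b1) - feval(a1, b1)   # delta with xj = b1
    d2 = feval(a2, b2) - feval(a1, b2)   # delta with xj = b2
    return d1 * d2 < 0                   # opposite signs -> interaction

sep = lambda x: x[0] ** 2 + x[1] ** 2    # separable: no interaction
nonsep = lambda x: x[0] * x[1]           # coupled: xi and xj interact
```

For the separable function both deltas have the same sign regardless of xj, so the product is non-negative and no interaction is reported.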

Current group-based LSMOAs primarily utilize random and differential grouping. However, these methods do not effectively balance solution diversity and convergence. Consequently, decision variable analysis-based strategies are used to achieve more effective grouping.

In LSMOPs characterized by a high number of decision variables, even after categorizing them into convergent and diverse types, the groups resulting from differential grouping may still be excessively large. For example, when an LSMOP exceeds 1000 dimensions, the decision variables are intricately related and the resulting group size can exceed 100, which is still an LSMOP and remains difficult to process. To address these challenges, this study proposes a weighted overlapping grouping strategy: for larger groups, weighted overlapping is applied, distributing highly interactive variables across subgroups to maximize intra-group and minimize inter-group interaction.

3.2 The Proposed Overall Framework of MOEAWOD

Algorithm 1 and Fig. 1 comprehensively describe the MOEAWOD framework. Initially, the decision variables undergo control analysis, where every position is fixed except one, to determine each variable's influence on the convergence of the objective function, categorizing the variables into convergent and diverse ones. An interaction analysis is then performed between pairs of convergent variables to assess their combined effect on the objectives. Subsequently, interacting variables are grouped using a weighted overlapping approach for simultaneous optimization. Finally, the grouped variables are alternately optimized.


Figure 1: Algorithm framework diagram

[Algorithm 1]

After initializing the population for LSMOPs (the parameters are set in the experimental setup section), decision variables undergo perturbation analysis. Each variable is fixed and individually perturbed within its space, creating a set of individuals differing only in the perturbed position. Diversity variables (DV), not affecting convergence, are randomly grouped for optimization, whereas convergent variables undergo weighted overlapping grouping. LSMOPs are transformed into manageable non-large-scale tasks for collaborative optimization after grouping.
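The perturbation analysis above can be realized as an MOEA/DVA-style control test. The following is a simplified sketch (function name and labels ours): fixing all other positions, a variable whose perturbed copies dominate one another mainly drives convergence, while one whose copies remain mutually non-dominated mainly affects diversity.

```python
import random

def classify_variable(F, x, i, lower, upper, n_perturb=20):
    """Perturb variable i of individual x while fixing the rest.
    If perturbed solutions dominate one another -> 'convergence';
    if they stay mutually non-dominated -> 'diversity'."""
    def dominates(a, b):
        return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))
    objs = []
    for _ in range(n_perturb):
        y = list(x)
        y[i] = random.uniform(lower, upper)   # perturb only position i
        objs.append(F(y))
    any_dom = any(dominates(a, b) or dominates(b, a)
                  for k, a in enumerate(objs) for b in objs[k + 1:])
    return 'convergence' if any_dom else 'diversity'
```

For instance, a variable entering every objective monotonically (all objectives improve together) is flagged as a convergence variable, whereas one that only trades objectives off against each other is flagged as a diversity variable.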

3.3 Method of Weighted Overlapping Grouping of Decision Variables

After identifying interactions between pairs of convergent decision variables, those with direct interactions are grouped. If a group becomes too large, a weighted overlapping grouping strategy is applied for further division. To ascertain interactions between pairs of convergent variables, an individual is randomly selected to study the interaction between positions xi and xj, with random values assigned within their decision spaces. In the original population, the value of individual x^k is given by xi^k = a1, xj^k = b1. Values a2 and b2 are randomly selected within the bounds of xi and xj. Subsequently, the function values F(x^k)|xi=a2, F(x^k)|xj=b2, and F(x^k)|xi=a2, xj=b2 are calculated, where F(x^k)|xi=a2, xj=b2 = F(x1^k, …, x(i−1)^k, a2, x(i+1)^k, …, x(j−1)^k, b2, x(j+1)^k, …, xn^k). The correlation between decision variables xi and xj concerning the objective function F is assessed through an interaction judgment formula, expressed as follows:

ΔiF|xj=b1 = F(X^k)|xi=a2, xj=b1 − F(X^k)|xi=a1, xj=b1
ΔiF|xj=b2 = F(X^k)|xi=a2, xj=b2 − F(X^k)|xi=a1, xj=b2    (2)

Hence, the decision variables xi and xj exhibit interactive characteristics whenever ΔiF|xj=b1 · ΔiF|xj=b2 < 0 holds. This implies that the monotonicity of the objective function F is jointly determined by xi and xj. In single-objective problems, the interaction between decision variables is directly discernible; in LSMOPs, variables i and j are considered to interact if at least one of the m objectives exhibits an interaction between them. In LSMOPs, optimizing groups of connected decision variables improves results. Yet, the complexity and quantity of these variables can lead to oversized groups, so the large-scale challenge remains.
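The weighting scheme described in Section 3.3 (interaction frequency plus number of objectives involved) can be sketched as repeated Eq. (2) tests in random subspaces. This is an illustrative implementation under our own naming, not the paper's code:

```python
import random

def interaction_weight(objectives, lower, upper, i, j, trials=6, seed=None):
    """Estimate interaction strength between variables i and j:
    repeat the Eq. (2) sign test `trials` times in random subspaces and
    return (number of detections, number of objectives involved)."""
    rng = random.Random(seed)
    n = len(lower)
    hits, objs_hit = 0, set()
    for _ in range(trials):
        x = [rng.uniform(lower[k], upper[k]) for k in range(n)]
        a1, b1 = x[i], x[j]
        a2 = rng.uniform(lower[i], upper[i])
        b2 = rng.uniform(lower[j], upper[j])
        def feval(m, xi, xj):
            y = list(x)
            y[i], y[j] = xi, xj
            return objectives[m](y)
        detected = False
        for m in range(len(objectives)):
            d1 = feval(m, a2, b1) - feval(m, a1, b1)
            d2 = feval(m, a2, b2) - feval(m, a1, b2)
            if d1 * d2 < 0:          # Eq. (2) sign reversal on objective m
                detected = True
                objs_hit.add(m)
        hits += detected
    return hits, len(objs_hit)
```

A higher detection count and more objectives involved indicate a stronger link, which is exactly what the grouping step uses to decide which variable to set aside as the overlap.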

The links between decision variables in LSMOPs are complex: dividing subgroups strictly by interactions leads to oversized subgroups, but disregarding interactions segments an indivisible problem and prevents finding the genuine optimal solution. Therefore, a weighted overlapping grouping strategy for dividing decision variables is proposed to address the excess of interconnected decision variables while protecting the integrity of the indivisible problem, as shown in Algorithm 2.

[Algorithm 2]

Because of the huge decision space, the interaction analysis of convergent variables is repeated multiple times to learn the real interaction situation and to avoid missing interactions that are absent in some subspaces. The number of detected interactions is used as the first weight between two variables; a higher count indicates a closer connection. At the same time, two variables in a multi-objective problem may interact on one or more objectives, so the number of interacting objectives is used as the second weight to indicate the intensity of the interaction (Steps 1–4). Next, decision variables are partitioned into weighted overlapping groups according to these interactions (Steps 8–16): directly or indirectly related decision variables are first grouped together (Steps 8–11), and weighted overlapping grouping is then applied to groups larger than the threshold alpha, which refines the subgroups while maintaining the connections between variables within them (Steps 12–16). The variable k with the strongest interactions is temporarily neglected, the remaining variables are regrouped by interaction, and finally k is added to each regrouped subgroup as the overlap.

To demonstrate the weighted overlapping grouping strategy, we apply it to the benchmark MOP proposed by the Walking Fish Group (WFG) with a setup of 10 decision variables and 3 objectives. Fig. 2a shows, for comparison, how other methods group directly connected variables in WFG6. In the test problem, x1 and x2 are diversity variables that are not subject to interaction analysis. The diagram reveals that x3 has direct connections with x5 and x6, and indirect connections with other variables through intermediary decision variables. This linkage via intermediaries often results in overly large groups, such as assigning the variables from x3 to x10 to the same group, which can reduce optimization performance. By omitting the variable with the strongest interactions, we refine the grouping: x3, x4, x5, x6, and x8 form Subgroup 1, while x7 and x9 form Subgroup 2, as shown in Fig. 2b. Reintegrating x10 into each subgroup ensures a balance between subgroup size and internal connectivity. The example uses only 10 decision variables to demonstrate weighted overlapping grouping clearly; the approach is equally practical for LSMOPs.


Figure 2: Weighted overlapping grouping
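The overlapping split can be reproduced on a small interaction graph. The sketch below is our own illustration of the strategy (the edge list mimics, but is not taken verbatim from, the WFG6 example): oversized connected components drop their most-connected variable, are regrouped, and the dropped variable is re-added to each subgroup as the overlap.

```python
from collections import defaultdict

def overlapping_group(edges, threshold):
    """Group variables by interaction-graph connectivity; components larger
    than `threshold` drop their most-connected variable, regroup, and re-add
    it to every subgroup (weighted overlapping grouping sketch)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def components(nodes, banned=frozenset()):
        seen, comps = set(), []
        for s in nodes:
            if s in seen or s in banned:
                continue
            stack, comp = [s], set()
            while stack:
                u = stack.pop()
                if u in seen or u in banned:
                    continue
                seen.add(u)
                comp.add(u)
                stack.extend(adj[u] - seen - banned)
            comps.append(comp)
        return comps

    groups = []
    for comp in components(sorted(adj)):
        if len(comp) <= threshold:
            groups.append(sorted(comp))
        else:
            # drop the most interactive variable, regroup, re-add it as overlap
            k = max(comp, key=lambda u: len(adj[u] & comp))
            for sub in components(sorted(comp - {k}), banned={k}):
                groups.append(sorted(sub | {k}))
    return groups
```

With edges linking variable 10 to several others, a component of size 8 splits into [3, 4, 5, 6, 8, 10] and [7, 9, 10], mirroring the Subgroup 1/Subgroup 2 structure of Fig. 2b.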

3.4 The Optimization Method for Grouping Decision Variables

After weighted overlapping grouping of decision variables, subgroups are collaboratively optimized using a direction-guided evolutionary method. This involves preselecting individuals to inherit directional cues, steering them toward optimal evolution in their respective subspaces, and attending to both convergence and diversity. To improve convergence in poorly converged populations, the direction points from a dominated solution to a non-dominated one; to enhance diversity, it points from one non-dominated solution to another.

[Algorithm 3]

The algorithm includes two main steps: preselecting individuals for directional information inheritance (Steps 1–17), and evolving non-dominated individuals using direction cues from preselected and non-dominated individuals (Steps 18–25). Preselection is primarily done to improve the diversity of the parent population and thereby facilitate more effective evolution, i.e., parent individuals that balance diversity and convergence can lead to better overall evolution. The angle-penalized distance (APD) is used to assess individual diversity and convergence, so as to select a few less convergent but well-distributed solutions (Steps 13–15). During evolution, preselected individuals keep the population from settling in a local optimum; preselected non-dominated individuals guarantee convergence, while the reference vector approach selects dominated individuals to ensure diversity. If the number of dominated solutions is below the threshold r, indicating good diversity, all dominated solutions are chosen to enhance population convergence (Steps 19–22); otherwise, selection from the non-dominated solutions expands diversity. Directional information is then added to the preselected individuals, completing the evolution process, in which the step size γ is drawn randomly from a normal distribution (Steps 23–27). A balanced step size is crucial: too large causes instability and local optima, while too small results in low efficiency. Thus, using the average length of the non-dominated solutions and direction vectors as the evolutionary step size is more effective.

Fig. 3 shows an example in which X3 is a dominated solution and X1 and X2 are non-dominated individuals. When there are many non-dominated solutions, suggesting room for convergence improvement, the direction points from the dominated solution X3 to the non-dominated solution X2. Conversely, a scarcity of non-dominated solutions, indicating low diversity, leads to a direction from the non-dominated solution X2 to X1, enhancing diversity potential.
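The directional update itself reduces to moving a solution along the vector between a guide and a target, scaled by the step size γ. The sketch below is our own minimal reading of this operator (function name and the normal-distribution parameters are assumptions, not from the paper):

```python
import random

def direction_guided_move(x_target, x_guide, gamma=None, rng=random):
    """Move x_target along the direction from x_guide toward x_target.
    With a dominated guide this pushes toward convergence; with a
    non-dominated guide it spreads the population (Section 3.4 sketch).
    The step size gamma is drawn from a normal distribution when not given."""
    if gamma is None:
        gamma = abs(rng.gauss(0.5, 0.15))   # illustrative distribution parameters
    return [xt + gamma * (xt - xg) for xt, xg in zip(x_target, x_guide)]
```

For example, moving [1, 1] away from a dominated guide at the origin with γ = 0.5 yields [1.5, 1.5], extending the population in the promising direction.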


Figure 3: Direction-guided evolution

3.5 Discussions

The core operations of the proposed MOEAWOD algorithm encompass two key components: weighted overlapping grouping and direction-guided population evolution. In weighted overlapping grouping, the convergence and diversity of decision variables are first analyzed: each of the n decision variables is classified after a constant number of perturbations, giving a time complexity of O(n). Following this analysis, the interaction between each pair of convergent variables is examined to calculate the weights, with a time complexity of O(n^2). The groups are then overlapped according to the weights: the most interactive variable is selected and the group is repartitioned, repeating until the group size meets the specified threshold, with a time complexity of O(log n). In direction-guided population evolution, preselected individuals exhibiting both diversity and convergence receive directional information, and directional information is then added to the preselected individuals; the time complexity of both operations is O(1).

In summary, the time complexity for decision variable grouping is O(n^2). For LSMOPs, an increase in the dimension of decision variables results in an exponential expansion of the decision space. While weighted overlapping grouping introduces a computational complexity of O(n^2), it significantly reduces the decision space compared to the original, enabling faster local convergence. Moreover, the combination of variables enhances the exploration of the potential optimization space more effectively.

4  Experiments

4.1 Experimental Setup

This study evaluates the performance of the proposed MOEAWOD through the utilization of two distinct test sets: the large-scale multi-objective optimization set LSMOPs and the continuous multi-objective set WFG. LSMOPs, comprising LSMOP1-LSMOP9, feature complex decision variable interactions, non-uniform variable grouping, and intricate fitness landscapes [39]. WFG, containing WFG1-WFG9, includes problems with complex Pareto fronts and optimal sets, suitable for assessing overall performance in diverse multi-objective scenarios [49]. We compare our algorithm with other advanced LSMOAs to validate its practical problem-solving effectiveness.

Different types of advanced LSMOAs were selected for comparison, including methods based on decision variable grouping and those reconstructing LSMOPs. The key comparison algorithms are the Decision Variable Clustering-based Evolutionary Algorithm (LMEA) [50], the Grouped and Linked Polynomial Mutation Operator (GLMO) [51], the Decision Variable Analysis and Reconstruction-based Multi-objective Evolutionary Algorithm (LERD) [52], and the reference-guided Fast Large-scale Evolutionary Algorithm (FLEA) [53]. All algorithms were sourced from PlatEMO 4.2, a platform encompassing various multi-objective algorithms, including those for large-scale problems [54]. The experiments were conducted on an Intel i5 PC with a 2.4 GHz CPU and Windows 10, and implemented in Matlab R2021b. Parameters are detailed in Table 1. Decision variables were perturbed 20 times for analysis and categorized into convergence and diversity variables. For convergence variable grouping, the interaction analysis was repeated 6 times to assess interaction strength. The grouping threshold was set at 60, with weighted overlapping grouping applied when the number of directly or indirectly connected variables exceeded this threshold.

[Table 1]

In multi-objective problems, performance cannot be measured solely by objective values. Consequently, the overall performance of the algorithm is evaluated using the Inverted Generational Distance plus (IGD+) metric [55–57], which effectively evaluates the convergence and diversity of the population. The formula is as follows:

IGD+(P) = (1/|Z|) Σ_{j=1}^{|Z|} min_{pi ∈ P} d+(pi, zj)    (3)

where d+(p, z) = sqrt( Σ_{k=1}^{m} (max{pk − zk, 0})^2 ) for minimization problems. Z denotes the reference points on the Pareto front of the specific problem, while P represents the results obtained by the algorithm. IGD+ evaluates the quality of solutions by calculating the average distance from each reference point to the nearest point in the region dominated by the solution set. A low IGD+ value indicates a high degree of similarity between P and Z, reflecting better convergence and distribution of P.
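Eq. (3) and the d+ distance translate directly into code. The sketch below (function name ours) computes IGD+ for a solution set P against reference points Z under minimization:

```python
import math

def igd_plus(P, Z):
    """Inverted Generational Distance plus, Eq. (3): average over reference
    points z in Z of the smallest modified distance
    d+(p, z) = sqrt(sum_k max(p_k - z_k, 0)^2) to the solution set P."""
    def d_plus(p, z):
        return math.sqrt(sum(max(pk - zk, 0.0) ** 2 for pk, zk in zip(p, z)))
    return sum(min(d_plus(p, z) for p in P) for z in Z) / len(Z)
```

Note that d+ only penalizes the components where the solution is worse than the reference point, which is what makes IGD+ weakly Pareto-compliant, unlike plain IGD.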

4.2 Experimental Results and Analysis

Initially, the performance of the algorithm is assessed on the LSMOP test problems, using the IGD+ metric to evaluate the convergence, diversity, Pareto optimality, and overall distribution characteristics of the populations generated by the proposed evolutionary algorithm. Table 2 compares the performance of the five algorithms on the LSMOP test series with 300, 500, and 1000 decision variables and 3 objectives. It shows the average IGD+ values and standard deviations for each algorithm over 30 independent runs on each test case. The Wilcoxon rank-sum test was employed for statistical testing at a significance level of 5%. Bold text indicates the best solutions among all compared algorithms. In the table, '+' denotes IGD+ metrics better than those of MOEAWOD, '−' indicates inferior solutions, and '=' signifies results similar to the proposed algorithm.
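The Wilcoxon rank-sum comparison of two IGD+ samples can be sketched without external libraries via the normal approximation (this self-contained version, without continuity or tie-variance corrections, is our illustration; in practice a statistics package would be used):

```python
import math

def ranksum_pvalue(a, b):
    """Two-sided Wilcoxon rank-sum test p-value via the normal approximation
    (average ranks for ties; no tie-variance or continuity correction)."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):           # assign average ranks to tied values
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0        # ranks are 1-based
        for k in range(i, j):
            ranks[k] = avg
        i = j
    W = sum(r for (v, src), r in zip(combined, ranks) if src == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (W - mean) / math.sqrt(var)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))
```

A p-value below 0.05 would mark two algorithms' 30-run IGD+ samples as significantly different, producing the '+', '−', or '=' labels used in Tables 2 and 3.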

[Table 2]

The LSMOP test problems, designed for complex large-scale multi-objective optimization, effectively reflect the performance of an algorithm in large-scale scenarios. The IGD+ values of the five algorithms on the LSMOP suite with 3 objectives and decision variables set at 300, 500, and 1000 dimensions are shown in Table 2. In this table and the following ones, 'D' denotes the dimension of the decision variables.

As Table 2 shows, in the LSMOP test suite MOEAWOD secured the best results in 12 of the tests, indicated by bold text, followed by LERD with 7 and LMEA with only 1; neither matched the overall performance of MOEAWOD. When the decision variables are set at 1000 dimensions, MOEAWOD achieves the optimal solution 6 times in the LSMOP test suite. This performance is notably superior to the other comparison algorithms, providing evidence of its effectiveness in addressing LSMOPs. Notably, in LSMOP1 with 500 and 1000 decision variables, the IGD+ of MOEAWOD was significantly lower than that of its competitors. However, its performance on LSMOP6 with 500 decision variables was weaker, likely due to insufficient analysis of decision variable convergence and less accurate capture of variable relationships in weighted overlapping grouping. Despite this, the overall superiority of MOEAWOD over LMEA, GLMO, LERD, and FLEA in the LSMOP test suite validates the effectiveness of the proposed weighted overlapping grouping-based MOEA in handling LSMOPs.

In addition, the performance of the algorithm is evaluated on the WFG test suite, characterized by complex variable interactions and more frequent interactions between different variables. Table 3 compares the performance of the five algorithms across WFG test cases with 300, 500, and 1000 decision variables and 3 objectives.

[Table 3]

In these tests, MOEAWOD achieved the best results in 11 cases, while LMEA excelled in 8. Because the WFG problems feature strong interactions among decision variables, the clustering-based variable grouping of LMEA performed well; overall, however, MOEAWOD showed superior optimization performance. Its effectiveness in handling strong variable interactions surpassed LMEA, GLMO, LERD, and FLEA, highlighting its advantage on LSMOPs.

Fig. 4 displays the final populations obtained by the compared algorithms on LSMOP1 with 500 decision variables and 3,000,000 iterations. GLMO and LERD achieved objective values around 1 but lacked uniform distribution and diversity. LMEA and FLEA were farther from the Pareto-optimal reference front, indicating poorer convergence, with the solutions of FLEA particularly concentrated and thus of limited diversity. MOEAWOD performed best, maintaining objective values close to 1 with a more uniform distribution, although some deviation from the true Pareto-optimal front remained, possibly due to errors in the analysis of the interactions among decision variables. In Fig. 5, which shows the LSMOP8 test with 300 variables and 3 objectives, GLMO and LERD lack diversity, LMEA and FLEA struggle with convergence, and the solutions of FLEA are concentrated between 5 and 15 on one objective, indicating poor convergence in that region. MOEAWOD maintains good convergence on all objectives, distributes its solutions uniformly, and surpasses the other algorithms in both convergence and diversity.


Figure 4: Effectiveness of comparison algorithms in LSMOP1 test problem with 500-dimensional 3-objective


Figure 5: Effectiveness of comparison algorithm in the LSMOP8 test problem with 300-dimensional 3-objective

Fig. 6 illustrates the evolution of the IGD+ values of the different algorithms on two LSMOP test problems. Owing to the poor convergence of certain algorithms on LSMOPs, 100-dimensional instances were employed to make the comparison clearer: the decision variables are 100-dimensional, the number of objectives is 3, and the maximum number of function evaluations is 1,000,000. Several comparison algorithms exhibited varying degrees of rebound during convergence, indicating instability. MOEAWOD, in contrast, converged stably on both tests, continuing to improve until convergence and achieving noticeably better IGD+ values than its competitors, thus demonstrating the most stable convergence performance and the best IGD+ results on both problems.


Figure 6: Evolution of IGD+ metrics for the contrasting algorithms on the LSMOP test problem

5  Conclusion

This paper addresses the inefficiency of optimization algorithms on LSMOPs caused by high-dimensional decision variables by proposing MOEAWOD. Current group-based LSMOAs typically account for decision variable interactions when grouping but overlook the intricate nature of those interactions in LSMOPs: a group containing over 100 decision variables still constitutes a large-scale problem in itself, contradicting the original intent of divide-and-conquer.

The proposed MOEAWOD assesses the interactivity of decision variables using two weights and adds the most interactive decision variable of each oversized group to its subgroups as an overlapping element. This strengthens the interactivity within each decision variable group while diminishing the interactivity between different groups. Each group is then optimized with a direction-guided population evolution strategy. The experimental results on various LSMOP benchmark suites demonstrate the strong convergence and diversity of the proposed MOEAWOD algorithm; the superiority of its IGD+ values over the compared algorithms substantiates its effectiveness.
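The overlapping step can be summarized schematically. The sketch below is an assumed shape, not the authors' implementation: the round-robin split in step 2 merely stands in for the paper's interaction-based regrouping, and all names (`weighted_overlap_regroup`, `weight`, `pivot`) are hypothetical.

```python
def weighted_overlap_regroup(group, weight, threshold, n_subgroups):
    """Schematic sketch of the weighted overlapping grouping step.

    group       : list of decision-variable indices in one oversized group
    weight      : dict mapping variable index -> interaction weight
                  (derived from interaction frequency and objective count)
    threshold   : maximum allowed group size
    n_subgroups : number of subgroups to split an oversized group into
    """
    if len(group) <= threshold:
        return [group]                      # small enough, leave unchanged
    # 1. Identify and set aside the most interactive variable.
    pivot = max(group, key=lambda v: weight[v])
    rest = [v for v in group if v != pivot]
    # 2. Split the remaining variables into subgroups. (Round-robin here;
    #    the paper regroups by pairwise interaction analysis.)
    subgroups = [rest[i::n_subgroups] for i in range(n_subgroups)]
    # 3. Add the pivot variable to every subgroup as the overlapping element.
    return [sub + [pivot] for sub in subgroups]
```

In this shape the pivot variable appears in every subgroup, so the strongest interaction is preserved within each subgroup while inter-group coupling through it is minimized.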

Nevertheless, MOEAWOD still exhibits limitations that warrant further investigation. Grouping based on weights obtained through DVA of random individuals before population evolution may yield interaction information that represents only the local subspace of the analyzed individuals, resulting in an analysis of decision variables that is limited to part of the huge decision space rather than the whole of it. Additionally, the limited number of evaluations used for weight measurement during DVA may introduce errors that deviate from the actual situation, potentially degrading the performance of LSMOAs. Future work will explore dynamic decision variable analysis, performing DVA according to the stage of population evolution to obtain precise variable interactions, and will consider designing more reasonable weights for a more accurate evaluation of interactions.

Acknowledgement: None.

Funding Statement: This work was supported in part by the Central Government Guides Local Science and Technology Development Funds (Grant No. YDZJSX2021A038); in part by the National Natural Science Foundation of China (Grant No. 61806138); and in part by the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) (Grant No. 2021FNA04014).

Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: Liang Chen, Jingbo Zhang, Linjie Wu; data collection: Jingbo Zhang, Yubin Xu, Xingjuan Cai; analysis and interpretation of results: Liang Chen, Yubin Xu, Xingjuan Cai; draft manuscript preparation: Liang Chen, Jingbo Zhang, Linjie Wu, Xingjuan Cai. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The dataset used to support the results of this study is available at https://GitHub.com/BIMK/PlatEMO/releases/tag/v4.2.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.





This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.