The study of optimization methods for reliability–redundancy allocation problems is a constantly changing field. New algorithms are continually being designed on the basis of observations of nature, wildlife, and humanity. In this paper, we review eight major evolutionary algorithms that emulate natural, physical, and social processes such as the behavior of civilizations, ants, bees, fish, and birds (i.e., genetic algorithms, bee colony optimization, simulated annealing, particle swarm optimization, biogeography-based optimization, artificial immune system optimization, the cuckoo algorithm, and the imperialist competitive algorithm). We examine the mathematical formulations and pseudo-codes of each algorithm and discuss how these apply to reliability–redundancy allocation problems. Results from a literature survey show the best results found for series, series–parallel, bridge, and applied case problems (e.g., the overspeed gas turbine benchmark). A review of the recent literature indicates extensive improvement in the reliability achieved by these algorithms. However, this improvement has been difficult to achieve for high-reliability applications. Insights and future challenges in the optimization of reliability–redundancy allocation problems are also discussed in this paper.
Continuous advancements in technology combined with the expectation for high performance by manufacturers and consumers add increasing complexity to modern manufactured systems. However, human errors during utilization (e.g., processing error, improper storage, poor equipment maintenance, or environmental impact) negatively affect the global performance and the designed life cycle of a manufactured system. Thus, system reliability is an important concern when designing and improving the efficiency of any engineering system. Reliability is defined as the likelihood of effectively achieving a set of functionality goals or product functions within a specified timeframe and in a regulated environment, even given uncertainty and inadequate knowledge of system actions.
System reliability design problems are divided into three categories: redundancy allocation problems, reliability allocation problems, and reliability–redundancy allocation problems (RRAPs). Common elements for all of these include decision variables, constraints, and objective function(s). Decision variables represent factors that can be altered, or decisions that can be made, to enhance system reliability. Constraints are factors that limit the design, such as price, volume, weight, and a reasonable specified reliability. The objective function(s) evaluates the overall system reliability for decision-variable values that satisfy the constraints. The objective may consist of a single function (e.g., system reliability to be maximized) or multiple functions (e.g., maximizing system reliability while minimizing system cost and volume).
System reliability optimization techniques evolved during the first half of the 20th century to support developments in mathematics, computing capabilities, optimization methodologies, system complexity, design, and management requirements [
The first use of dynamic programming to solve reliability problems was done for redundancy allocation problems [
EAs are a large family of optimization methods that evolve an initial random population toward an optimum through successive iterations. The technique is based on the Darwinian concept: weak solutions face extinction, whereas better ones combine to produce new solutions that can improve convergence toward an optimal solution. These strategies are based on natural observations and try to mimic animal behaviors (e.g., those of ants, bees, fish, and birds), human phenomena (e.g., immigration and colonization), and even biological interactions (e.g., an immune system). Presently, genetic algorithms (GAs), bee colony optimization (BCO), simulated annealing (SA), particle swarm optimization (PSO), biogeography-based optimization (BBO), artificial immune system optimization (AISO), the cuckoo algorithm (CA), and the imperialist competitive algorithm (ICA) are among the most widely used EAs in different optimization fields such as consecutively connected systems [
In this paper, we discuss and review the evolution of improvements in the optimization of complex system reliability using EA solutions. We focus on the reliability–redundancy allocation problem and present the concepts and findings of the key EAs currently in use. This review summarizes the best obtained results for series, series–parallel, and complex systems. The remainder of this paper is organized as follows. In Section 2, reliability–redundancy allocation problems are introduced. Their mathematical formulation is detailed in Section 3. Section 4 contains a detailed discussion of major efforts to apply EAs to RRAP optimization. A review of constraint-handling techniques for constrained mixed-integer nonlinear problems is included in Section 5. The latest findings are summarized in Section 6. In Section 7, future challenges in RRAP optimization are discussed. Finally, the conclusion is given in Section 8.
Engineering systems can be generally structured in four types: series, parallel, series–parallel, and complex systems (
Three typical forms of the reliability optimization problem exist for these system types: the redundancy allocation problem, the reliability allocation problem, and the reliability–redundancy allocation problem (RRAP). Because of the differing assumptions and problem types, the solution approaches for each are different.
Redundancy allocation problems concern systems composed of discrete component types with fixed reliability, cost, and weight. The practical problem is choosing the optimal set of component types (decision variables) that meets the constraints (reliability, weight, cost, volume) and achieves the objective function (maximize reliability, minimize weight, etc.). In this type of problem, there are ci discrete component type choices available for each subsystem, and ni components must be selected from the ci available choices. Redundancy allocation problems are often considered for series–parallel systems.
For the reliability allocation problem, the system structure is fixed and the component reliability values are continuous decision variables. Component parameters (cost, weight, volume) are functions of the component reliability. Increasing the system reliability by using more reliable components raises the cost, and likely the weight and volume, which may be incorporated in the constraints or in the objective function.
The reliability-redundancy allocation problem is the most general problem formulation. Each subsystem has ni components with reliability of ri as decision variables. The challenge then is to assign redundancy and reliability optimally to the components of each subsystem in order to optimize the overall reliability of the system. The system and subsystems can be in any arrangement.
For each series, series–parallel, and bridge system, the reliabilities are defined, respectively, as:
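Since the defining equations are not reproduced here, the series and bridge reliability functions can be sketched in code (the series–parallel form depends on the specific configuration and is omitted). The bridge network is evaluated by pivotal decomposition on the bridging subsystem; labeling the bridging element as subsystem 5 is an assumption of this sketch:

```python
import math

def subsystem_reliability(r, n):
    """Reliability of a subsystem of n parallel components, each of reliability r."""
    return 1 - (1 - r) ** n

def series_system(r, n):
    """Series system: R_s = prod_i [1 - (1 - r_i)^n_i]."""
    return math.prod(subsystem_reliability(ri, ni) for ri, ni in zip(r, n))

def bridge_system(r, n):
    """Five-subsystem bridge network, via pivotal decomposition on
    subsystem 5 (the bridging element)."""
    R1, R2, R3, R4, R5 = (subsystem_reliability(ri, ni) for ri, ni in zip(r, n))
    # If the bridge works, the network is two parallel pairs in series.
    works_given_5 = (1 - (1 - R1) * (1 - R3)) * (1 - (1 - R2) * (1 - R4))
    # If the bridge fails, the network is two series paths in parallel.
    works_without_5 = R1 * R2 + R3 * R4 - R1 * R2 * R3 * R4
    return R5 * works_given_5 + (1 - R5) * works_without_5
```

Adding redundant components to a subsystem raises its reliability toward 1, which is why both reliability levels and redundancy counts appear as decision variables.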
The aim of the RRAP is to improve the reliability of the entire system in a specified controlled environment, subject to constraints on the cost, weight, and volume of the system. The system cost function
The weight function of the system depends on the number of components per subsystem and the number of subsystems, and can be defined as [
Early studies used a range of 3 to 9 for the weight function parameter [
The system volume function also depends on the number of components per subsystem and the number of subsystems, and it can be defined as [
In summary, the general RRAP can be expressed in the following formulation, in which the reliability is considered as the objective function:
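In code, the general constrained formulation can be sketched as follows. The cost, volume, and weight forms are the ones commonly used in the RRAP literature (cost αᵢ(−T/ln rᵢ)^βᵢ(nᵢ + e^(nᵢ/4)), volume Σvᵢnᵢ², weight Σwᵢnᵢe^(nᵢ/4)); folding the mission time into αᵢ and taking vᵢ as the tabulated product wᵢvᵢ² are assumptions of this sketch:

```python
import math

def rrap_cost(r, n, alpha, beta):
    """C(r, n) = sum_i alpha_i * (-1 / ln r_i)^beta_i * (n_i + exp(n_i / 4))."""
    return sum(a * (-1.0 / math.log(ri)) ** b * (ni + math.exp(ni / 4.0))
               for a, b, ri, ni in zip(alpha, beta, r, n))

def rrap_volume(n, v):
    """V(n) = sum_i v_i * n_i^2 (v_i read as the tabulated w_i * v_i^2)."""
    return sum(vi * ni ** 2 for vi, ni in zip(v, n))

def rrap_weight(n, w):
    """W(n) = sum_i w_i * n_i * exp(n_i / 4)."""
    return sum(wi * ni * math.exp(ni / 4.0) for wi, ni in zip(w, n))

def feasible(r, n, alpha, beta, v, w, C, V, W):
    """The three resource constraints of the general RRAP."""
    return (rrap_cost(r, n, alpha, beta) <= C
            and rrap_volume(n, v) <= V
            and rrap_weight(n, w) <= W)
```

An EA then maximizes the system reliability over (r, n) while keeping `feasible` true, either by rejecting infeasible candidates or via a penalty function as discussed later.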
All EAs share three common features: population, fitness, and evolution. Population represents the set of individual solutions that are initially randomized and then optimized to attain a final solution. A constant population size is generally maintained during the evolution process. Fitness ranks the population/solution according to some established performance criteria. Fitter individuals impact the remaining individuals through the evolutionary process. To explore a problem's solution space to attain the best population, random or controlled variation is important.
The first use of an EA for a system reliability problem was in the field of GA [
SA was first introduced by [
Reference [
Reference [
A timeline of the cited evolutionary algorithms is shown in
The following algorithm descriptions are applied for RRAPs with these assumptions:
The component supply is infinite, and all redundant components are similar for each individual subsystem. The failures of individual components are independent. Failed components do not harm the system and are not fixed. The weight, cost, and volume of the components are known and deterministic.
GA is based on genetic mechanisms and Darwin's theory of the evolution of species, using concepts such as mutation, crossing over, survival of the fittest, and natural selection [
The evolutionary process begins when all fitness values of the initial population have been assigned. Each individual is evaluated on the basis of the results from the fitness function. The individuals follow a selection process in which the best (fittest) survive and are selected to continue regeneration, while the others disappear. Parents exchange their genetic information randomly to produce a new child population. This process includes the crossover and mutation operators. To maintain a constant population size, parents are then replaced by children in the population. This reproduction (selection, crossover, and mutation) is repeated and takes place according to the probability of crossover (Pc) and the probability of mutation (Pm). The procedure iteratively assesses solution quality until the solution converges toward the global optimum or a defined iteration number is reached.
The selection operator plays an essential role because it is the first operator applied to the population. Its goal is to retain above-average solutions and remove below-average solutions from the next generation, following the principle of “survival of the relatively fit.” In this review, we define a size-two tournament selection process with replacement as the selection operator [ Four rules apply: when both chromosomes are feasible, the one with the higher fitness value is chosen; when one chromosome is feasible and the other infeasible, the feasible one is chosen; when both are infeasible with unequal constraint violations, the one with the smaller violation is chosen; and when both are infeasible with equal violations, either is chosen.
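The four tournament rules above translate almost directly into code. This is an illustrative sketch (function names and the violation-as-number convention are assumptions; violation 0 means feasible):

```python
import random

def tournament_winner(a, b, fitness, violation):
    """Apply the four feasibility rules of size-two tournament selection."""
    va, vb = violation(a), violation(b)
    if va == 0 and vb == 0:          # both feasible: higher fitness wins
        return a if fitness(a) >= fitness(b) else b
    if va == 0 or vb == 0:           # one feasible: it wins
        return a if va == 0 else b
    if va != vb:                     # both infeasible: smaller violation wins
        return a if va < vb else b
    return random.choice((a, b))     # equal violations: pick either

def tournament_select(pop, fitness, violation):
    """Size-two tournament with replacement: draw two, keep the winner."""
    return tournament_winner(random.choice(pop), random.choice(pop),
                             fitness, violation)
```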
A crossover operator is added to the surviving chromosomes after the selection process. This is the operation that really empowers a GA. It simultaneously acts on two or more parent chromosomes and produces offspring by recombining features of the parent solutions. In this review, we describe an intermediate crossover for integer variables and a power crossover for floating point variables [
In the case of intermediate crossover,
In the case of power crossover,
Finally, the new two children
The purpose of the mutation procedure is to introduce random variations into the population. These random variations are used to prevent the search process from converging to local optima. The mutation process sometimes helps recover the data lost in previous generations and is responsible for the system's fine tuning. This operator only applies to a single chromosome. Usually, its rate is very low. In this review, we discuss the one-neighborhood mutation for integer variables method [
In the case of one-neighborhood mutation,
In the case of non-uniform mutation,
Thus, the final gene is
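The crossover and mutation operators named above can be sketched as follows. These are standard textbook forms (an arithmetic blend for intermediate crossover, a geometric blend for power crossover, a ±1 step for one-neighborhood mutation, and Michalewicz-style shrinking perturbation for non-uniform mutation); the exact variants in the cited works may differ:

```python
import random

def intermediate_crossover(n1, n2, lam):
    """Intermediate crossover for integer genes: rounded arithmetic blend."""
    return (round(lam * n1 + (1 - lam) * n2),
            round(lam * n2 + (1 - lam) * n1))

def power_crossover(r1, r2, lam):
    """Power crossover for floating-point genes: geometric blend."""
    return r1 ** lam * r2 ** (1 - lam), r2 ** lam * r1 ** (1 - lam)

def one_neighborhood_mutation(n, lo, hi):
    """Move an integer gene to a random immediate neighbor, clipped to bounds."""
    return min(hi, max(lo, n + random.choice((-1, 1))))

def non_uniform_mutation(r, lo, hi, gen, max_gen, b=2.0):
    """Perturbation whose magnitude shrinks as generations advance,
    which is what gives the GA its late fine-tuning behavior."""
    tau = random.random()
    delta = 1 - tau ** ((1 - gen / max_gen) ** b)
    if random.random() < 0.5:
        return r + (hi - r) * delta
    return r - (r - lo) * delta
```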
Developed by [
In the initialization stage of a BCO, bees randomly select food source positions and nectar amounts are determined by a fitness function. These bees return to the hive and share this nectar source information with bees waiting in the dance area of the hive. Next, the second stage occurs, where every employed bee goes back to the same food source area visited during the previous cycle. Thus, the probability ph of an onlooker bee choosing a preferred food source at Xh is defined as
After a solution is generated, this solution is improved by using a local search by a process called the “greedy selection process” carried out by onlooker and employed bees. The greedy selection process is defined by the following equation:
If a particular food source solution does not improve for a predetermined number of iterations, its associated bee abandons it and becomes a scout, searching for a new food source. A randomly generated food source is then assigned to this scout, whose status changes back to employed. Another algorithm iteration/cycle then begins, until the termination condition, a maximum number of cycles or a relative error limit, is reached.
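The onlooker-probability, greedy-selection, and abandonment mechanics described above can be sketched as follows (the fitness-proportional probability form and the trial-counter convention are standard artificial-bee-colony conventions, assumed here):

```python
import random

def onlooker_probabilities(fitness_values):
    """p_h = fit_h / sum(fit): probability an onlooker chooses source h."""
    total = sum(fitness_values)
    return [f / total for f in fitness_values]

def greedy_selection(current, candidate, fitness):
    """Keep the better of the current source and the new candidate."""
    return candidate if fitness(candidate) > fitness(current) else current

def scout_if_exhausted(source, trials, limit, random_source):
    """Abandon a source unimproved for `limit` trials; the bee scouts
    a fresh random source and its trial counter resets."""
    return (random_source(), 0) if trials >= limit else (source, trials)
```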
The different steps of a BCO are summarized in the following pseudo-code:
SA [
SA emulates the physical concepts of temperature and energy. The objective function of an SA is treated as the energy of a dynamic system, while the temperature controls a randomized search for a solution. For a simulation, the state of the dynamic system is identified with the state of the system being optimized. The procedure is the following: the system is brought to a high temperature and then cooled slowly through a sequence of temperature decrements. At each step, the algorithm searches for the equilibrium state of the system using a series of elementary changes, which are accepted if the system energy decreases. As the temperature decreases, only smaller energy increases can be tolerated, and the system gradually settles into a low-energy state. This property allows the algorithm to escape from a locally optimal configuration when it is not close to the global minimum. The likelihood of accepting an uphill move depends on the temperature and the extent of the rise (Δ).
The algorithm begins by randomly choosing an initial solution (s) and then calculates the energy (Es) for s. After setting an initial temperature T, a neighbor finding strategy is used to produce a neighbor solution
The SA algorithm requires several decisions to be made. These decisions concern the choice of energy function, cooling schedule, neighborhood structure, and parameters for annealing (i.e., initial temperature, cooling factor, increasing repetition factor of the inner loop, and stopping condition). Each decision must be made carefully because it affects the speed of the algorithm and the consistency of the solutions.
The key to success for the SA algorithm is the energy function. It forms the energy landscape and influences how a solution is found by the algorithm. The energy function reflects the objective function to be optimized for an allocation problem. The neighborhood determines the process to switch from a solution point to another solution point.
The cooling schedule describes the temperature reduction mechanism when an equilibrium state is reached. The number of inner loop repetitions (nrep) and the cooling rate (
The annealing parameters govern the choice of initial temperature, cooling factor, increasing repetition factor, and stopping condition.
Initial temperature: the temperature at which the schedule starts; it must be high enough that most candidate moves are initially accepted.
Cooling factor χ: the rate at which the temperature decreases.
Increasing factor β: the rate (>1) at which the internal number of repetitions increases as the temperature decreases. It is necessary to spend a longer time at lower temperatures to ensure that a local optimum has been thoroughly explored; the lower the temperature, the greater the number of inner-loop repetitions.
Stopping condition: expressed in terms of either the temperature parameter or the steady state of the system with the current solution. Other approaches designate a number of iterations or temperature thresholds that must pass without an accepted move. The simplest rule is to specify a total number of iterations and stop when this number has been completed. In this paper, the final temperature is chosen to give a low acceptance value.
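Putting the acceptance rule, cooling factor χ, and increasing factor β together gives a minimal SA loop. The Metropolis acceptance probability exp(−Δ/T) is the standard form; all parameter defaults below are illustrative, not values from the cited works:

```python
import math
import random

def accept(delta, T):
    """Metropolis rule: always accept improvements; accept an uphill
    move of size delta with probability exp(-delta / T)."""
    return delta <= 0 or random.random() < math.exp(-delta / T)

def anneal(energy, neighbor, s, T0, chi=0.9, beta=1.05, nrep=10, T_min=0.5):
    """Minimal SA loop: geometric cooling by chi, inner-loop length
    growing by beta as the temperature drops."""
    best, T, reps = s, T0, nrep
    while T > T_min:
        for _ in range(reps):
            cand = neighbor(s)
            if accept(energy(cand) - energy(s), T):
                s = cand
                if energy(s) < energy(best):
                    best = s
        T *= chi                     # cooling factor chi
        reps = math.ceil(reps * beta)  # increasing factor beta
    return best
```

Because `best` tracks the lowest-energy state ever visited, the returned solution is never worse than the starting point even though individual moves may go uphill.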
The different steps of SA are summarized in the following pseudo-code:
PSO was initially proposed and designed by [
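A minimal sketch of the canonical per-dimension velocity/position update follows; the inertia weight w and acceleration coefficients c1, c2 are conventional illustrative values, not parameters taken from the cited works:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update per dimension:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v."""
    new_v = [w * vi
             + c1 * random.random() * (pi - xi)
             + c2 * random.random() * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Each particle keeps its personal best (pbest) while the swarm shares a global best (gbest); a particle sitting at both bests with zero velocity stays put.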
The different steps of PSO are summarized in the following pseudo-code:
Based on the concept of biogeography, BBO mimics the distribution of species that depend on rainfall, diversity of topographic features, temperature, land area, etc [
The migration (i.e., immigration or emigration) of species in a single habitat is explained in
The first stage of the BBO algorithm is to assign the parameter values. These parameters are maximum species count (
In the BBO algorithm, each individual or decision variable is considered to be a “habitat.” An HSI represents the measurement of an individual, and the length of the habitat depends on the number of decision variables. This HSI is a measure of the goodness of fit of the solution and is equivalent to the fitness value in population-based optimization algorithms. The corresponding variables of individuals that characterize the habitability feature are called suitability index variables (SIVs). These are randomly initialized within the maximum and minimum limits by
Migration is an adaptive process that is used to change an existing habitat's SIV. Each individual has its own immigration rate (λ) and emigration rate (μ) to improve the solution and thus evolve a solution to the optimization problem. This solution is modified on the basis of its probability (
High-HSI habitats resist change more than low-HSI habitats. Weak solutions accept more information from successful solutions, which increases the algorithm's ability to exploit. The immigration and emigration rates are initially based on the number of species in the habitat and are calculated by
Besides migration, BBO includes a mutation feature. Mutation is used to improve population diversity in order to obtain the best solution. Because very low and very high HSI values correspond to improbable species counts, such habitats are more likely to mutate than average-HSI-valued habitats. For instance, let us assume that there are a total of
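The linear migration model and the probability-based mutation rate can be sketched as follows (the linear forms λₖ = I(1 − k/n) and μₖ = Ek/n and the rate m = m_max(1 − Pₛ/P_max) are the standard BBO choices, assumed here):

```python
def migration_rates(k, n, I=1.0, E=1.0):
    """Linear model for a habitat hosting k of n possible species:
    immigration lambda_k = I * (1 - k/n), emigration mu_k = E * k/n."""
    return I * (1 - k / n), E * k / n

def mutation_rate(p_s, p_max, m_max=0.01):
    """m = m_max * (1 - P_s / P_max): habitats whose species count is
    improbable (very low or very high HSI) mutate the most."""
    return m_max * (1 - p_s / p_max)
```

An empty habitat has maximal immigration and no emigration; a full habitat is the reverse, which is what drives information flow from strong solutions to weak ones.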
The different steps of BBO are summarized in the following pseudo-codes (Algorithms 7–9).
AISO was first introduced by Rechenberg [
The immune system's main role is to recognize and remove intruders, for which it has the capacity for self/non-self discrimination. As previously discussed, different antibodies can be produced and specific antigens can be recognized. The part of the antigen recognized by the antibody is called the epitope, which functions as an antigen determinant. Every type of antibody also has its own specific antigen determinant, called an idiotope. Additionally, an activated lymphocyte needs to proliferate and then differentiate into effector cells to generate sufficient unique effector cells to counter an infection. This mechanism is called clonal selection [
From this viewpoint, an antibody and an antigen can be viewed as the solution and the objective function, respectively, for solving the reliability–redundancy allocation optimization problem.
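One clonal-selection iteration can be sketched as follows: select the highest-affinity antibodies, clone them in proportion to rank, hypermutate the clones, and keep the best. The rank-proportional cloning and the hypermutation interface are generic clonal-selection conventions, not details from the cited work:

```python
import random

def clonal_selection_step(antibodies, affinity, n_select, clone_factor, mutate):
    """One iteration: the better an antibody's rank, the more clones it
    gets; clones are hypermutated; population size is preserved."""
    ranked = sorted(antibodies, key=affinity, reverse=True)[:n_select]
    pool = list(ranked)                      # elites survive unmutated
    for rank, ab in enumerate(ranked, start=1):
        n_clones = max(1, round(clone_factor / rank))
        pool.extend(mutate(ab) for _ in range(n_clones))
    return sorted(pool, key=affinity, reverse=True)[:len(antibodies)]
```

Because the unmutated elites stay in the pool, the best affinity in the population never decreases from one iteration to the next.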
The different steps of an AIS are summarized in the following pseudo-code:
CS was introduced by [ It is based on three idealized rules: each cuckoo lays one egg at a time and deposits it in a randomly picked nest; the best nests with high-quality eggs (i.e., solutions) are carried over to the next generations; and the number of available host nests is fixed, with an alien egg discovered by the host with a probability of pa ∈ [0,1]. In this last scenario, the host bird may either throw the egg away or abandon the nest to establish an entirely new nest in a new location.
For simplicity, this last statement can be approximated by the fraction of nests that are replaced by new nests with new random solutions. As with other evolutionary methods, the solution's fitness function is described similarly. In this process, an egg already present in the nest will represent the previous solution, whereas the cuckoo's egg represents the new solution. The goal is to replace worse solutions that are in the nests with new and theoretically better solutions (cuckoos). The basic steps of a cuckoo search are defined in Algorithm 11 on the basis of these three rules.
The new solution
In Mantegna's algorithm, the step length is calculated by
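Mantegna's construction draws a step s = u/|v|^(1/β) with u ~ N(0, σᵤ²), v ~ N(0, 1), where σᵤ depends only on β. A sketch, with the step-scaling constant α and the bias toward the current best as common cuckoo-search conventions assumed here:

```python
import math
import random

def mantegna_sigma(beta=1.5):
    """Scale of the numerator Gaussian in Mantegna's algorithm:
    sigma_u = [Gamma(1+b) sin(pi b/2) / (Gamma((1+b)/2) b 2^((b-1)/2))]^(1/b)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(beta=1.5):
    """Heavy-tailed step length s = u / |v|^(1/beta)."""
    u = random.gauss(0.0, mantegna_sigma(beta))
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(x, best, alpha=0.01, beta=1.5):
    """New candidate via a Levy flight biased toward the current best."""
    return [xi + alpha * levy_step(beta) * (xi - bi)
            for xi, bi in zip(x, best)]
```

The heavy tail means most steps are small (local refinement) while occasional large jumps keep the search exploring distant regions.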
The steps of a CS are summarized in the following pseudo-code:
The ICA was recently introduced by [
One characteristic of the imperialism policy is that over time, colonies start to change their culture to be more similar to their respective imperialist. In ICA, this process is implemented by moving the colonies toward their imperialist in the search space domain and is called assimilation. During this event, a colony may become more powerful than its imperialist. In this case, the colony takes the place of the imperialist and the imperialist becomes the colony.
Imperialistic competition is ruled by the fact that the most powerful empires tend to increase their power, whereas the poorest ones tend to collapse. This phenomenon makes the number of imperialistic countries decrease during the run of the algorithm and the colonies of the defeated imperialist become part of another empire.
These two mechanisms force the algorithm to converge into a single empire, in which the imperialist and the colonies will have the same culture. Translating the ICA's metaphor into optimization language, the culture is a possible solution and the power is used to measure a country's objective function evaluation.
ICA starts with the initialization process (i.e., initialization of the empires), followed by the movement of colonies toward their imperialists (i.e., assimilation), the evaluation of the total cost of each empire, and finally the imperialistic competition.
In ICA, the countries are the possible solutions in the search space. They are represented by a 1 × Nvar-dimensional array defined by
In the beginning, an initial population is randomly created from a uniform distribution and divided into imperialists and colonies; the imperialists are the countries with the highest power (i.e., the lowest cost). Since the proposed problems are maximization problems, the cost of a country is given by the inverse of the cost function
The colonies will be distributed among the imperialists according to the power of each imperialist. For this purpose, the normalized cost is defined by
Finally, the normalized power for each imperialist is defined by
The assimilation process (i.e., movement of colonies toward the imperialist) depends on the distance between the colonies and their imperialist, a real constant called an “assimilation coefficient,” and a random number in the range [0,1]. The following equation represents the new position of a colony:
After this movement, a colony may reach a position with a higher cost than the imperialist. In this case, the colony will become the new imperialist, whereas the old imperialist will become a colony of the same empire.
The cost of an empire is most affected by the imperialist cost, although it is also affected by the colonies’ costs. This event is stated by
The imperialistic competition comprises disputes between empires to possess colonies. This competition increases the power of the most powerful empires, whereas the poorest empires tend to lose power. Imperialistic competition proceeds by choosing the weakest colony (the colony with the highest cost) from the weakest empire to be disputed by the other empires. For this purpose, it is assumed that
This creates the vector
The vector
Finally, the empire chosen to possess the colony is the one that has the highest value of
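The assimilation move, empire total cost, and competition step described above can be sketched as follows (β ≈ 2 for the assimilation coefficient and ξ ≈ 0.1 for the colony-cost weight are conventional ICA values, assumed here):

```python
import random

def assimilate(colony, imperialist, beta=2.0):
    """Move a colony a random fraction (up to beta) of the distance
    toward its imperialist, per decision variable."""
    return [c + beta * random.random() * (i - c)
            for c, i in zip(colony, imperialist)]

def empire_total_cost(imp_cost, colony_costs, xi=0.1):
    """T.C. = imperialist cost + xi * mean colony cost: dominated by the
    imperialist but influenced by the colonies."""
    mean_col = sum(colony_costs) / len(colony_costs) if colony_costs else 0.0
    return imp_cost + xi * mean_col

def competition_winner(total_costs):
    """Pick the empire that wins the disputed colony: normalize total
    costs, form possession probabilities, subtract uniform noise, argmax."""
    ntc = [tc - max(total_costs) for tc in total_costs]
    s = sum(ntc)
    p = [abs(n / s) for n in ntc] if s != 0 else [1 / len(ntc)] * len(ntc)
    d = [pi - random.random() for pi in p]
    return d.index(max(d))
```

If a colony assimilates past its imperialist's cost, the two swap roles, which is the takeover mechanism described earlier.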
Different steps of ICA are summarized in the following pseudo-code:
With the aid of an EA, several approaches have been suggested to handle the constraints when solving constrained optimization problems. The penalty function method is a very common technique in RRAP. With this method, the constrained optimization problem is translated into an unconstrained one: the modified objective function contains the original objective function plus a penalty imposed for violating the constraints. Recently, a penalty function approach to handle the constraints has been proposed [
And
For a minimization problem, +M is used instead of −M. In this work, we used the value 99,999 for M.
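One common form of this penalty scheme, for a maximization RRAP, can be sketched as follows (adding the total violation on top of −M, so that less-infeasible solutions still rank above more-infeasible ones, is an assumption of this sketch):

```python
def penalized_objective(R, violations, M=99_999.0):
    """Penalty-function handling for a maximization RRAP: a feasible
    solution keeps its reliability R; an infeasible one is pushed to
    -M minus its total constraint violation (g > 0 means violated)."""
    total = sum(max(0.0, g) for g in violations)
    return R if total == 0 else -M - total
```

For a minimization problem the sign flips: the penalized value becomes +M plus the total violation.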
Besides these “classical” problems, we consider a fourth benchmark problem for RRAP. This problem represents the overspeed protection system for a gas turbine (
Subsystem | 10⁵αᵢ | βᵢ | wᵢvᵢ² | wᵢ | V | C | W |
---|---|---|---|---|---|---|---|
1 | 2.33 | 1.5 | 1 | 7 | 110 | 175 | 200 |
2 | 1.45 | 2 | 8 | ||||
3 | 0.54 | 3 | 8 | ||||
4 | 8.05 | 4 | 6 | ||||
5 | 1.95 | 2 | 9 |
Subsystem | 10⁵αᵢ | βᵢ | wᵢvᵢ² | wᵢ | V | C | W |
---|---|---|---|---|---|---|---|
1 | 2.50 | 1.5 | 2 | 3.5 | 180 | 175 | 100 |
2 | 1.45 | 4 | 4 | ||||
3 | 0.54 | 5 | 4 | ||||
4 | 0.54 | 8 | 3.5 | ||||
5 | 2.10 | 4 | 4.5 |
Subsystem | 10⁵αᵢ | βᵢ | wᵢvᵢ² | wᵢ | V | C | W |
---|---|---|---|---|---|---|---|
1 | 1.0 | 1.5 | 1 | 6 | 250 | 400 | 500 |
2 | 2.3 | 2 | 6 | ||||
3 | 0.3 | 3 | 8 | ||||
4 | 2.3 | 2 | 7 |
To compare the performance of each EA, we define the maximum possible improvement (MPI) index [
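One common definition of this index measures the improvement of the best result over another approach as a fraction of the remaining headroom (1 − R); this particular form is an assumption consistent with the percentage values reported in the tables:

```python
def mpi(r_best, r_other):
    """Maximum possible improvement, in percent:
    100 * (r_best - r_other) / (1 - r_other)."""
    return 100.0 * (r_best - r_other) / (1.0 - r_other)
```

Normalizing by 1 − R is what makes tiny absolute gains meaningful near R = 1: moving from 0.99990 to 0.99995 is an MPI of 50%.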
Generally, even if the improvement offered by the proposed approaches seems extremely small, small improvements in reliability are often hard to achieve in high-reliability applications. The best results are obtained by the most recently developed approaches such as BBO (2015), CA (2016), and ICA (2016). Recently, new approaches combine different EAs sequentially or in parallel.
GA [ | BCO [ | SA [ | PSO [ | BBO [ | AISO [ | CA [ | ICA [ |
---|---|---|---|---|---|---|---|---
.931578 | .93167841 | .931460 | .8885037 | .93168238 | .931678 | .93168210 | .93168138 | |
3 | 3 | 3 | 2 | 3 | 3 | 3 | 3 | |
.779427 | .779403 | .782391 | .800592 | .779405 | .779266 | .779439 | .779760 | |
2 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | |
.869482 | .871833 | .866712 | .740493 | .871833 | .872513 | .871995 | .872088 | |
2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | |
.902674 | .902886 | .901747 | .829143 | .902881 | .902634 | .902873 | .902689 | |
3 | 3 | 3 | 4 | 3 | 3 | 3 | 3 | |
.714038 | .711398 | .717266 | .636861 | .711401 | .710648 | .711127 | .711405 | |
3 | 3 | 3 | 2 | 3 | 3 | 3 | 3 | |
.786896 | .787808 | .783795 | .887042 | .787805 | .788406 | .787986 | .787208 | |
0.15 | 0.01 | 0.32 | 38.73 | Best | 0.01 | ≈ 0 | ≈ 0 |
GA [ | BCO [ | SA [ | PSO [ | BBO [ | AISO [ | CA [ | ICA [ |
---|---|---|---|---|---|---|---|---
.99997418 | .99997661 | .99997631 | .99985845 | .99997664 | .99997658 | .99997664 | .99997661 | |
2 | 2 | 2 | 4 | 2 | 2 | 2 | 2 | |
.785452 | .822437 | .812161 | .840252 | .819658 | .812485 | .819483 | 0.822012 | |
2 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | |
.842998 | .842382 | .853346 | .888650 | .844910 | .843155 | .844783 | .843656 | |
2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | |
.885333 | .897571 | .897597 | .623750 | .895487 | .897385 | .895810 | .891290 | |
2 | 2 | 2 | 1 | 2 | 2 | 2 | 2 | |
.917958 | .8918627 | .900710 | .939849 | .895514 | .894516 | .895220 | .898698 | |
4 | 4 | 4 | 2 | 4 | 4 | 4 | 4 | |
.870318 | .8685979 | .866316 | .751586 | .868468 | .870590 | .868542 | .868249 | |
9.53 | 0.13 | 1.39 | 83.50 | Best | 0.26 | Best | 0.13 |
GA [ | BCO [ | SA [ | PSO [ | BBO [ | AISO [ | CA [ | ICA [ |
---|---|---|---|---|---|---|---|---
.99987916 | .99988959 | .99988764 | .99988957 | .99988963 | .99988921 | .99988963 | .99988963 | |
3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | |
.814090 | .827222 | .807263 | .826678 | .828060 | .812485 | .827855 | .827642 | |
3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | |
.864614 | .856301 | .868116 | .857172 | .858040 | .867661 | .857626 | .857478 | |
3 | 2 | 3 | 2 | 2 | 3 | 2 | 3 | |
.980291 | .914575 | .872862 | 0.914629 | .914148 | .861221 | .914752 | .914196 | |
3 | 4 | 3 | 4 | 4 | 3 | 4 | 3 | |
.701190 | .651220 | .712673 | 0.648918 | .647968 | .713852 | .648217 | .649273 | |
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | |
.734731 | .701774 | .751034 | 0.715291 | .704204 | .756699 | .702670 | .704092 | |
8.66 | 0.04 | 1.77 | 0.05 | Best | 0.38 | Best | Best |
GA | BCO [ | SA [ | PSO [ | BBO [ | AISO [ | CA [ | ICA [ |
---|---|---|---|---|---|---|---|---
- | .99995467 | .999945 | .99995300 | .99995467 | .999942 | .99995467 | .99995467 | |
- | 5 | 5 | 5 | 5 | 5 | 5 | 5 | |
- | .901840 | .895644 | .902231 | .901562 | .903800 | .901598 | .901489 | |
- | 5 | 5 | 6 | 5 | 5 | 5 | 6 | |
- | .888208 | .885878 | .856325 | .888224 | .874992 | .888226 | .850035 | |
- | 4 | 5 | 4 | 4 | 5 | 4 | 4 | |
- | .948134 | .912184 | .948145 | .948155 | .919898 | .948101 | .948129 | |
- | 6 | 5 | 5 | 6 | 5 | 6 | 5 | |
- | .849942 | .887785 | .883156 | .849952 | .890609 | .849980 | .888238 | |
- | Best | 17.58 | 3.55 | Best | 21.84 | Best | Best |
Modern complex-system reliability optimization raises certain challenges linked to ‘pragmatism’, i.e., dealing with more realistic reliability behaviors of the components. Indeed, the previous problem formulations rest on assumptions such as active redundancy, perfect switching of redundant components, and binary behavior of components/systems,… which keep the idealized problem far from the realistic case.
For example, considering the ideal binary state (functioning/failed) for components and systems is not a realistic description of the multi-state system reliability behavior [
Moreover, with the active-redundancy assumption, the failure time of a parallel subsystem is the maximum of the individual component failure times, and the reliability is defined by standard probability approaches [
System designs with active redundancy have fully activated components: in the event of a primary component failure, the redundant components continue to provide the required design functions until all of them have also failed. Cold-standby redundancy, by contrast, uses non-activated components that can be turned on in response to a primary component failure; it requires failure-detection switches and redundant-unit activation. When active or cold-standby redundancy can be chosen independently for individual subsystems, researchers prefer to optimize reliability [
Series problem | Series-parallel problem | Bridge problem | Overspeed gas turbine problem | |
---|---|---|---|---|
CDEPSO [ | GA-PSO [ | GA-PSO [ | CDEPSO [ |
.93168239 | 0.99999988 | 0.99999952 | .99995467 | |
3 | 4 | 4 | 5 | |
.779398 | 0.860328 | 0.858430 | 0.90151342 | |
2 | 3 | 3 | 5 | |
.871836 | 0.827071 | 0.700000 | 0.88811463 | |
2 | 2 | 3 | 4 | |
.902886 | 0.872994 | 0.922386 | 0.94838428 | |
3 | 3 | 3 | 6 | |
.711402 | 0.937884 | 0.700000 | .84991061 | |
3 | 2 | 1 | - | |
.787799 | 0.701148 | 0.700000 | - | |
≈ 0 | 99.49 | 99.57 | 0 |
Another weakness of the idealized reliability assessment is the assumption of certainty in the input parameter values. Intrinsic uncertainty and imperfect knowledge of system behavior, however, are always present. Among the aspects, factors, and causes of uncertainty, we note lack of information and knowledge, model approximation, the conflicting nature of information, and measurement errors, … [
System reliability optimization remains an ongoing topic of scientific improvement. New methods and algorithms are produced on the basis of mathematical advancements and new optimization approaches. This paper presented an overview of reliability–redundancy allocation problems with a focus on series, series–parallel, bridge, and overspeed gas turbine systems. EAs used for reliability–redundancy allocation problems were reviewed, along with their mathematical background and pseudo-code details. We reviewed EAs based on their mimicry of the social behavior of humans and animals, biological interactions, and even the behavior of cooling molten metal (i.e., GAs, ant colony optimization, BCO, SA, PSO, BBO, Tabu Search, AISO, CA, and ICA).
RRAP optimization was investigated by GAs in 1995. The literature review presented in this paper summarized the best found results for each EA. All the cited techniques are promising and viable tools to solve RRAP. However, recent techniques such as the BBO (2015), CA (2016), and ICA (2016) generate the best results. Additionally, recent investigations tend to merge different techniques and sometimes generate better results than the use of a single algorithm.
Finally, we discussed current issues that require more complex systems with far more practical component/system reliability behaviors and that contain multi-state, uncertain data. Nevertheless, new perspectives are emerging, and effective approaches to addressing new challenges and more complex systems will need to be sought.