Open Access

ARTICLE

An Adaptive Imperialist Competitive Algorithm with Cooperation for Flexible Jobshop and Parallel Batch Processing Machine Scheduling

Jie Wang, Deming Lei*

School of Automation, Wuhan University of Technology, Wuhan, China

* Corresponding Author: Deming Lei. Email: email

(This article belongs to the Special Issue: Swarm-Based Optimization and its Cross-Disciplinary Applications in Modern Engineering)

Computers, Materials & Continua 2026, 87(3), 79 https://doi.org/10.32604/cmc.2026.076202

Abstract

Both flexible jobshop scheduling and parallel batch processing machine scheduling have been extensively studied; however, the flexible jobshop and parallel batch processing machine scheduling problem (FJPBPMSP), which is prevalent in real-life manufacturing processes, is seldom investigated. In this study, FJPBPMSP is examined, where flexible processing and batch processing are performed sequentially. An adaptive imperialist competitive algorithm with cooperation (CAICA) is proposed to minimize makespan and total energy consumption simultaneously. In CAICA, a four-string representation is adopted, and initial empires with novel structures are formed by uniformly dividing the population. Adaptive assimilation and revolution are designed, and an adaptive imperialist competition with cooperation is provided. Search strategies, imperialists, and colonies are also renewed by new procedures. Computational experiments are conducted on 50 instances. The computational results show that the new strategies of CAICA are effective and that CAICA provides better results than its comparative algorithms in solving FJPBPMSP.

Keywords

Flexible jobshop scheduling; parallel batch processing scheduling; imperialist competition algorithm; cooperation

1  Introduction

Flexible production is widely used. A flexible jobshop consists of $m$ machines, in which at least one operation can be processed by more than one machine. The flexible jobshop scheduling problem (FJSP) arises in many industries such as automobile assembly, textile, chemical material processing, and semiconductor manufacturing, and consists of two main sub-problems: machine assignment and scheduling. In the past decades, FJSP has been studied extensively, and a number of results have been obtained.

Batch processing is a typical production mode and has been widely applied in industries such as textile, chemical, mineral, pharmaceutical, and semiconductor manufacturing. Batch processing machines (BPMs) can simultaneously process multiple jobs in a batch. Parallel batching and serial batching are often considered; the processing time of the former is the maximum processing time of all jobs in a batch, while the processing time of the latter is the sum of the processing times of all jobs in a batch. The parallel BPM scheduling problem is a typical BPM scheduling problem and has attracted much attention.

As shown above, FJSP and parallel BPM scheduling problems have been studied fully; however, as the combination of FJSP and BPM scheduling, the flexible jobshop and parallel batch processing machine scheduling problem (FJPBPMSP) is seldom considered. In fact, this problem exists in electronic product performance testing [1] and transformer production [2], among others. There are typically two types of FJPBPMSP. The first is FJSP in which some machines are BPMs. The second consists of a flexible jobshop followed by parallel BPMs: single job processing is first done in the flexible jobshop, and then batch processing is executed on the parallel BPMs. The second type often occurs in transformer production and is hardly considered.

The imperialist competitive algorithm (ICA) is inspired by the historical phenomena of imperialism and colonialism and has good neighborhood search ability, effective global search capability, a good convergence rate, and a flexible structure. In recent years, ICA has been applied to various production scheduling problems, such as FJSP [3,4] and parallel BPM scheduling [5], and the results revealed that ICA has strong search advantages on these problems; however, ICA has not been used to solve FJPBPMSP. Since FJPBPMSP is the integration of FJSP and parallel BPM scheduling, the advantages of ICA on both component problems suggest that it has potential advantages in handling FJPBPMSP, so ICA is chosen in this study.

The prior research on FJPBPMSP mainly addresses the first type with the minimization of makespan, and multi-objective FJPBPMSP is seldom considered. On the other hand, only one paper [2] is related to the second type of FJPBPMSP, which often exists in transformer production, so it is necessary to handle the second type of FJPBPMSP with multiple objectives.

The main contributions of this paper are summarized as follows:

(1) A multi-objective FJPBPMSP and a mixed-integer linear programming (MILP) model are developed. (2) An adaptive imperialist competitive algorithm with cooperation (CAICA) is proposed to minimize makespan and total energy consumption simultaneously. To produce high quality solutions, a four-string representation is employed to encode the integrated decisions of machine assignment, operation sequencing, batching, and batch scheduling; an empire initialization strategy with uniform division is devised to enhance population diversity; adaptive assimilation and revolution operators are introduced to achieve balanced exploration and exploitation; a cooperative imperialist competition mechanism is constructed to improve convergence stability and solution quality. (3) Computational experiments are conducted on 50 instances. The computational results demonstrate the effectiveness of new strategies and the promising performance of CAICA in solving multi-objective FJPBPMSP.

2  Related Work

The previous works on FJSP, parallel BPM scheduling, and FJPBPMSP are shown below.

2.1 FJSP

FJSP has attracted much attention since the pioneering work of Brucker and Schlie [6]. Dauzère-Pérès et al. [7] provided a survey paper, which reviewed the related works on FJSP over the past 30 years according to criteria, constraints, configurations, and solution approaches. As shown in the survey, exact approaches [8], heuristics [9,10], and meta-heuristics are often used to solve FJSP. Meta-heuristics are the main method for FJSP because of the problem's high complexity. FJSP has been solved by simulated annealing [11], tabu search (TS, [12]), variable neighborhood search (VNS, [13]), genetic algorithm (GA, [14]), particle swarm optimization (PSO, [15]), ant colony optimization (ACO, [16]), ICA [3], etc.

Reducing energy consumption or environmental impacts has become a main concern of manufacturing companies, and energy-efficient scheduling problems have been investigated fully in the past decade. As a typical energy-efficient scheduling problem, energy-efficient FJSP has attracted much attention. Luo et al. [17] presented an elaborately designed multi-objective grey wolf optimization algorithm to solve the problem with variable processing speeds. Ebrahimi et al. [18] introduced four meta-heuristics to solve the problem with three machine states, facility energy cost, and job tardiness penalty. Li et al. [19] solved the problem with setup time and transportation by using an improved Jaya algorithm with problem-specific local search operators. Caldeira et al. [20] dealt with the problem with new job arrivals and a turn-on/off strategy and presented an improved backtracking search algorithm.

Akram et al. [21] studied dynamical energy-efficient FJSP with insertion of new jobs and proposed a multi-objective black widow spider algorithm. Jiang et al. [22] solved the problem with two speed-adjustable resources by using a Q-learning-based biology migration algorithm. Zhao et al. [23] developed a double Q-learning-assisted competitive evolutionary algorithm for solving the problem with breakdown and job insertion. Hao et al. [24] presented a hybrid search genetic algorithm (HSGA) to deal with energy-saving FJSP. Xiao et al. [25] studied the production scheduling problem in the aerospace industry and proposed an improved decomposition-based multi-objective evolutionary algorithm (IMOEA/D). Luan et al. [26] presented an enhanced non-dominated sorting genetic algorithm-II (ENSGA-II) to solve energy-saving FJSP. Fan et al. [27] gave a knowledge-enhanced multi-objective memetic algorithm for the problem with a limited multi-load automated guided vehicle. Li and Lei [4] presented an ICA with feedback to solve the problem with transportation and setup times.

2.2 Parallel BPM Scheduling

The parallel BPM scheduling problem consists of some BPMs and n jobs. On each BPM, jobs are first grouped into batches, and then batches are processed. In the past decade, a number of results on parallel BPM scheduling have been obtained.

Uniform parallel BPM scheduling is often solved by mathematical programming [28], heuristic [29], and meta-heuristics such as ACO [30,31], GA [5], ICA [5] and differential evolution [32].

With respect to unrelated parallel BPM scheduling, Lu et al. [33] developed a hybrid artificial bee colony with TS to solve the problem with maintenance and deteriorating jobs. Zhou et al. [34] solved the problem with different capacities and arbitrary job sizes by a random key GA. A shuffled frog-leaping algorithm with VNS is presented for the problem with nonlinear processing time [35]. Xiao et al. [36] proposed a tabu-based adaptive large neighborhood search algorithm. Hu et al. [37] presented a mixed integer linear programming model and an adaptive large neighborhood search for the problem with two-dimensional packing constraints in additive manufacturing.

2.3 Flexible Jobshop and Parallel Batch Processing Machine Scheduling

Two types of FJPBPMSP are often considered. In the first problem, some operations of jobs can be processed on BPMs. For the first problem with all operations processed on BPM, Ham and Cakici [38] proposed an enhanced mixed integer programming (MIP) model, an MIP model with valid inequalities, and a constraint programming model, while Ham [39] formulated an MIP model with priority jobs. Ji et al. [40] proposed a multi-commodity flow model for small-scale problems and constructed an improved adaptive large neighborhood search algorithmic framework with an optimal repair and tabu-based components. Regarding the first problem, where a portion of operations are processed on BPMs, Zhang et al. [1] studied the problem that involves both mandatory and flexible batch processing operations, established an MIP model, and developed a GA enhanced with neighborhood search to minimize makespan. Wang et al. [41] presented an improved scatter search algorithm integrated with a batch job addition algorithm and simulated annealing-based local search. Xue et al. [42] established an MIP model and developed an enhanced multi-population GA with VNS and an immediate predecessor operation-based batching method to minimize makespan.

The second FJPBPMSP is hardly considered, in which n jobs are processed in a flexible jobshop and parallel BPMs sequentially. Song et al. [2] dealt with the problem by using discrete PSO with Q-learning algorithm, batch machine selection rule, and a segmented encoding method, and applied the proposed approach to solve the scheduling problem with makespan in transformer production.

3  Problem Description

FJPBPMSP is composed of $n$ jobs $J_1,J_2,\ldots,J_n$ and $m+q$ machines. Machines $M_1,M_2,\ldots,M_m$ are used for single job processing, and the flexible jobshop consists of these $m$ machines. $M_{m+1},\ldots,M_{m+q}$ are BPMs, and a batch is the basic processing unit on them. The processing procedure of jobs has two phases: single job processing in the flexible jobshop and then batch processing on BPMs.

In the flexible jobshop, there exists at least one operation with two or more candidate machines in $S_{ij}$. On the BPMs $M_k$ ($k\ge m+1$), before batch processing is done, jobs are first allocated to BPMs; then, on each BPM, batches are formed under the capacity constraint, which means that the total weight of all jobs in each batch on $M_k$ ($k\ge m+1$) cannot exceed $Q_k$; finally, batches are processed in a given sequence on each BPM. Transportation time and energy consumption per unit transportation time are considered. Each machine has two operational modes: processing mode and idle mode.

There are some constraints on jobs and machines.

A job or a batch can be processed on at most one machine at a time.

A machine processes at most one job or one batch at a time.

Processing of the job or batch cannot be interrupted.

A job can only be assigned to one batch.

The problem is composed of five sub-problems: machine assignment and scheduling in the flexible jobshop, BPM assignment, batch formation, and batch scheduling on parallel BPMs. Machine assignment and BPM assignment are used to select a machine $M_k$ ($k\le m$) and a BPM $M_k$ ($k>m$) for each job $J_i$. Scheduling and batch scheduling are applied to decide the processing sequences of jobs and batches, respectively. Batch formation is about the formation of batches.

The definitions of the mathematical notations are listed in Table 1.

[Table 1]

The MILP formulation is detailed below.

MIN $C_{\max}=\max\{C_i \mid i=1,2,\ldots,n\}$ (1)

MIN $TEC=EC_1+EC_2+EC_3$ (2)

s.t.

$s_{ij}+\sum_{k=1}^{m}x_{ijk}p_{ijk}=c_{ij},\quad i=1,\ldots,n,\ j=1,\ldots,h_i$ (3)

$\sum_{k\in S_{ij}}x_{ijk}=1,\quad i=1,\ldots,n,\ j=1,\ldots,h_i$ (4)

$s_{ij}+p_{ijk}\le s_{dg}+I(1-y_{ijdgk}),\quad i,d=1,\ldots,n,\ j=1,\ldots,h_i,\ g=1,\ldots,h_d,\ k\in S_{ij}\cap S_{dg}$ (5)

$s_{dg}+p_{dgk}\le s_{ij}+I\,y_{ijdgk},\quad i,d=1,\ldots,n,\ j=1,\ldots,h_i,\ g=1,\ldots,h_d,\ k\in S_{ij}\cap S_{dg}$ (6)

$s_{i,j+1}\ge c_{ij}+\sum_{k=1}^{m}\sum_{l=1}^{m}x_{ijk}\,x_{i,j+1,l}\,T_{kl},\quad i=1,\ldots,n,\ j=1,\ldots,h_i-1$ (7)

$ar_{il}\ge c_{i,h_i}+\sum_{k=1}^{m}x_{i,h_i,k}T_{kl},\quad i=1,\ldots,n,\ l=m+1,\ldots,m+q$ (8)

$\sum_{k=m+1}^{m+q}E_{ik}=1,\quad i=1,\ldots,n$ (9)

$\sum_{v=1}^{n}X_{ivk}=E_{ik},\quad i=1,\ldots,n,\ k=m+1,\ldots,m+q$ (10)

$\sum_{i=1}^{n}w_iX_{ivk}\le Q_kY_{kv},\quad k=m+1,\ldots,m+q,\ v=1,\ldots,n$ (11)

$PT_{kv}\ge bp_{ik}X_{ivk},\quad i=1,\ldots,n,\ k=m+1,\ldots,m+q,\ v=1,\ldots,n$ (12)

$RT_{kv}\ge ar_{ik}X_{ivk},\quad i=1,\ldots,n,\ k=m+1,\ldots,m+q,\ v=1,\ldots,n$ (13)

$ST_{kv}\ge RT_{kv},\quad k=m+1,\ldots,m+q,\ v=1,\ldots,n$ (14)

$ST_{kv}\ge CT_{k,v-1},\quad k=m+1,\ldots,m+q,\ v=2,\ldots,n$ (15)

$CT_{kv}=ST_{kv}+PT_{kv},\quad k=m+1,\ldots,m+q,\ v=1,\ldots,n$ (16)

$Y_{k,v+1}\le Y_{kv},\quad k=m+1,\ldots,m+q,\ v=1,\ldots,n-1$ (17)

$C_i\ge CT_{kv}X_{ivk},\quad i=1,\ldots,n,\ k=m+1,\ldots,m+q,\ v=1,\ldots,n$ (18)

where

$EC_1=\sum_{k=1}^{m}pe_k\Big(\sum_{i=1}^{n}\sum_{j=1}^{h_i}x_{ijk}p_{ijk}\Big)+\sum_{k=m+1}^{m+q}pe_k\Big(\sum_{v=1}^{n}Y_{kv}PT_{kv}\Big)$ (19)

$EC_2=\sum_{k=1}^{m}ie_k\Big(\bar{C}_{fle}-\sum_{i=1}^{n}\sum_{j=1}^{h_i}x_{ijk}p_{ijk}\Big)+\sum_{k=m+1}^{m+q}ie_k\Big(C_{\max}-ST_{k1}-\sum_{v=1}^{n}Y_{kv}PT_{kv}\Big)$ (20)

$EC_3=\sum_{i=1}^{n}\sum_{j=1}^{h_i-1}\sum_{k=1}^{m}\sum_{l=1}^{m}x_{ijk}\,x_{i,j+1,l}\,T_{kl}\,e_{trans}+\sum_{i=1}^{n}\sum_{k=1}^{m}\sum_{l=m+1}^{m+q}x_{i,h_i,k}E_{il}T_{kl}\,e_{trans}$ (21)

Eqs. (1) and (2) specify two objective functions for minimization: makespan and total energy consumption. Constraint (3) calculates operation completion time. Constraint (4) assigns each operation to exactly one machine. Constraints (5) and (6) prevent operation overlap on the same machine. Constraint (7) considers transportation time between operations. Constraint (8) calculates the arrival time at batch machines. Constraint (9) assigns each job to one batch machine. Constraint (10) assigns jobs to batches. Constraint (11) enforces batch capacity limits. Constraint (12) determines batch processing time. Constraint (13) calculates batch arrival time. Constraint (14) ensures batch start time is after arrival. Constraint (15) sequences batches on the same machine. Constraint (16) calculates batch completion time. Constraint (17) maintains batch sequence continuity. Constraint (18) sets the job completion time as the batch completion time. Eqs. (19), (20), and (21) are the energy consumption of processing, idle time, and transportation, respectively.

Tables 2 and 3 show an example of the problem with 6 jobs, 3 machines for single job processing, and 2 BPMs. "–" means that $O_{ij}$ cannot be processed on the corresponding machine or BPM. $w_1=40$, $w_2=38$, $w_3=20$, $w_4=47$, $w_5=54$, $w_6=43$, $Q_4=80$, $Q_5=110$, $pe_1=3.99$, $pe_2=4.00$, $pe_3=2.77$, $pe_4=7.42$, $pe_5=8.73$, $ie_k=1$, $e_{trans}=2.15$. Fig. 1 gives a schedule of the example with $C_{\max}=366$, $TEC=4704.78$.

[Table 2]

[Table 3]

Figure 1: A schedule of the example.

In this scheduling example, the energy consumption is computed as follows. Take job $J_4$ as an example. Operation $O_{41}$ is processed on machine $M_1$ from time 0 and completed at time 65; the corresponding processing energy consumption is $65\times 3.99=259.35$. Then, $J_4$ is transported to machine $M_3$ for processing operation $O_{42}$; since $T_{13}=9$, the corresponding transportation energy consumption is $9\times 2.15=19.35$. $M_1$ is idle after processing $O_{41}$ and before processing $O_{22}$, with an idle time of 3 and idle energy consumption of 3.
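The bookkeeping above can be checked with a few lines of arithmetic. The Python sketch below simply multiplies durations by the power values of the example instance in Section 3; the variable names are illustrative, not from the paper.

```python
# Power values from the example instance: pe1 is the processing power of
# M1, etrans the energy per unit transportation time, ie1 the idle power.
pe1, etrans, ie1 = 3.99, 2.15, 1.0

processing_ec = pe1 * (65 - 0)   # O41 runs on M1 over [0, 65] -> 259.35
transport_ec  = etrans * 9       # T13 = 9 time units from M1 to M3 -> 19.35
idle_ec       = ie1 * 3          # M1 idles 3 units before O22 -> 3
```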

4  CAICA for FJPBPMSP

FJPBPMSP has many sub-problems, and each of them is NP-hard. To solve FJPBPMSP, a novel ICA is developed by using new structures of empires, adaptive assimilation, revolution, and the combination of competition and cooperation of empires.

4.1 Initialization and Initial Empires Formation

The problem is the integration of FJSP and the parallel BPM scheduling problem and has five sub-problems, so a four-string representation is used. For the problem with $n$ jobs, $m$ single job processing machines, and $q$ BPMs, a solution is represented as a scheduling string $(\theta_1,\theta_2,\ldots,\theta_h)$, a machine assignment string $(M_{\sigma_{11}},M_{\sigma_{12}},\ldots,M_{\sigma_{1h_1}},M_{\sigma_{21}},\ldots,M_{\sigma_{nh_n}})$, a BPM assignment string $(M_{\nu_1},M_{\nu_2},\ldots,M_{\nu_n})$, and a BPM scheduling string $(\pi_1,\pi_2,\ldots,\pi_n)$, where $h=\sum_{i=1}^{n}h_i$ and $\theta_i\in\{1,2,\ldots,n\}$; in the scheduling string, 1 occurs $h_1$ times, 2 occurs $h_2$ times, 3 occurs $h_3$ times, and so on. $M_{\sigma_{ij}}\in\{M_1,M_2,\ldots,M_m\}$ is the processing machine of $O_{ij}$, $M_{\nu_i}\in\{M_{m+1},M_{m+2},\ldots,M_{m+q}\}$ is the BPM of $J_i$, and $\pi_i\in\{1,2,\ldots,n\}$.

The decoding procedure is described as follows.

(1) Convert the scheduling string $(\theta_1,\theta_2,\ldots,\theta_h)$ into an ordered operation list $((\theta_1,r_1),(\theta_2,r_2),\ldots,(\theta_h,r_h))$ [43], in which $(\theta_i,r_i)$ indicates operation $O_{\theta_i r_i}$; then, starting with $O_{\theta_1 r_1}$, sequentially allocate the processing of $O_{\theta_i r_i}$ on the machine decided by the machine assignment string; after $O_{\theta_i r_i}$ is processed, transfer job $J_{\theta_i}$ to the machine for $O_{\theta_i(r_i+1)}$; if $r_i=h_{\theta_i}$, then transfer job $J_{\theta_i}$ to $M_{\nu_{\theta_i}}$.

(2) On each BPM $M_l$ ($l>m$), decide all jobs assigned to $M_l$ according to the BPM assignment string and obtain a sub-string of all jobs on $M_l$ in terms of the BPM scheduling string; then form batches $B_{l1},B_{l2},\ldots$ by using the sub-string under the capacity constraint; finally, process all batches on $M_l$ by the first-formed first-processing rule.

The first-formed first-processing rule means that the processing sequence of batches is their formation sequence.

Suppose the sub-string of $M_l$ is $(\pi_1,\pi_2,\ldots,\pi_{\tau_l})$, where $\tau_l$ indicates the number of all jobs assigned to $M_l$.

Batch formation on Ml is described as follows.

(1) Construct batch $B_{l1}$, let $W_{l1}=0$, $b_l=1$.

(2) For $(\pi_1,\pi_2,\ldots,\pi_{\tau_l})$, repeat the following steps until all jobs of the sub-string are allocated into batches: starting with the first job of the sub-string, sequentially select jobs of the sub-string and add them into $B_{lb_l}$ until the capacity constraint is violated; then delete all jobs of $B_{lb_l}$ from the sub-string, let $b_l=b_l+1$, construct a batch $B_{lb_l}$, and let $W_{lb_l}=0$.

For the example shown in Tables 2 and 3, a solution is composed of scheduling string [6 3 2 4 4 5 2 5 3 6 1 5 4 6 3 1 5 2 1 6], machine assignment string [2 2 3 3 1 2 2 2 1 1 3 2 2 2 3 3 3 1 3 1], BPM scheduling string [3 4 6 1 5 2], and BPM assignment string [4 4 4 4 5 5]. After the processing in the flexible jobshop is finished, jobs $J_3,J_4,J_1,J_2$ are moved to $M_4$, and $J_5,J_6$ are transferred to $M_5$. On $M_4$, the sub-string is $(J_3,J_4,J_1,J_2)$: construct $B_{41}$ with $W_{41}=0$; add $J_3$ into $B_{41}$, so $B_{41}=\{J_3\}$ and $W_{41}=20$; then $J_4$ is included into $B_{41}$, $W_{41}=67$; when $J_1$ is considered, $W_{41}+w_1>Q_4$, so $J_1$ cannot be added into $B_{41}$; then $B_{42}$ is formed using the same steps, $B_{42}=\{J_1,J_2\}$. Because $B_{41},B_{42}$ are formed sequentially, the processing sequence is $B_{41},B_{42}$. On $M_5$, the sub-string is $(J_5,J_6)$. Following the same batching logic, batch $B_{51}$ is constructed starting with $W_{51}=0$; $J_5$ is first added into $B_{51}$, resulting in $B_{51}=\{J_5\}$ and $W_{51}=54$; subsequently, when $J_6$ is considered, since $W_{51}+w_6<Q_5$, it is also included into $B_{51}$, completing the batch formation on $M_5$.
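The batch formation procedure above amounts to first-fit sequential batching. The Python sketch below illustrates it under the assumption that a BPM's sub-string is given as a job list together with a weight table; names are illustrative, not the authors' code.

```python
def form_batches(substring, weights, capacity):
    """First-fit sequential batching under a capacity constraint.

    `substring` is the job sequence assigned to one BPM (from the BPM
    scheduling string); batches are processed in formation order
    (first-formed first-processing rule).
    """
    batches, current, load = [], [], 0.0
    for job in substring:
        # Close the current batch when adding the job would exceed capacity.
        if current and load + weights[job] > capacity:
            batches.append(current)
            current, load = [], 0.0
        current.append(job)
        load += weights[job]
    if current:
        batches.append(current)
    return batches

# Example of Tables 2 and 3: M4 with Q4 = 80 and sub-string (J3, J4, J1, J2).
w = {1: 40, 2: 38, 3: 20, 4: 47, 5: 54, 6: 43}
print(form_batches([3, 4, 1, 2], w, 80))  # [[3, 4], [1, 2]]
```

This reproduces $B_{41}=\{J_3,J_4\}$ and $B_{42}=\{J_1,J_2\}$ from the worked example.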

Initial population P consists of N initial solutions and is generated in the following way:

(1) For each xiP, randomly produce a scheduling string, a BPM assignment string, and a BPM scheduling string.

(2) The strategy [44] is used to produce a machine assignment string of all solutions.

With respect to the strategy in [44], the machine assignment strings of $0.4N$ initial solutions are generated by the global selection method, those of $0.5N$ solutions are produced by the local selection method, and those of $0.1N$ solutions are randomly generated.

Initial empires are formed below.

(1) Execute non-dominated sorting and crowding distance computation [45], compute the normalized cost $\bar{c}_{x_i}$ for each $x_i\in P$, and select the $2N_{im}$ solutions with the biggest normalized cost as imperialists $IM_1,IM_2,\ldots,IM_{2N_{im}}$; sort all imperialists in descending order of normalized cost and suppose that $\bar{c}_{IM_1}\ge\bar{c}_{IM_2}\ge\cdots\ge\bar{c}_{IM_{2N_{im}}}$; then, for each empire $k$, $IM_k$ and $IM_{2N_{im}+1-k}$ are added into it.

(2) For the $N-2N_{im}$ colonies $x_1,x_2,\ldots,x_{N-2N_{im}}$, form a uniformly random permutation by the Fisher–Yates shuffle algorithm [46]; suppose the permutation is $(1,2,\ldots,N-2N_{im})$; then allocate each $x_i$ into empire $i \bmod N_{im}$, where $i \bmod N_{im}$ indicates the remainder of $i/N_{im}$.

$\bar{c}_{x_i}=\max_{x_t\in P}\{rank_{x_t}\}-rank_{x_i}+dist_{x_i}\Big/\sum_{x_t\in\Theta_{rank_{x_i}}}dist_{x_t}$ (22)

where $\bar{c}_{x_i}$ is the normalized cost of $x_i$, $rank_{x_i}$ and $dist_{x_i}$ are the rank value and crowding distance of $x_i$ decided by non-dominated sorting, and $\Theta_{rank_{x_i}}$ is the set of solutions with rank value $rank_{x_i}$.

There are $N_{im}$ empires, each of which has a new structure including two imperialists and $(N-2N_{im})/N_{im}$ colonies; that is, the whole population is divided uniformly into $N_{im}$ empires. In the existing ICAs [47–50], the population is often divided non-uniformly. When two imperialists are used, colonies have more choices in assimilation, and exploration ability can be extended. If only one imperialist is applied, the diversity of the empire diminishes greatly when the imperialist cannot be updated over several generations, so two imperialists are chosen.
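The empire initialization can be sketched in Python as follows, under the assumptions that the $k$-th best imperialist is paired with the $k$-th worst and that colonies are spread round-robin after a Fisher–Yates shuffle (Python's `random.shuffle`); all names are illustrative.

```python
import random

def form_empires(pop, norm_cost, n_im):
    """Uniform empire initialization sketched from Section 4.1.

    Selects the 2*n_im solutions with the largest normalized cost as
    imperialists, pairs imperialist k with imperialist 2*n_im + 1 - k,
    and distributes the remaining colonies uniformly after shuffling.
    """
    order = sorted(pop, key=lambda x: norm_cost[x], reverse=True)
    imps, colonies = order[:2 * n_im], order[2 * n_im:]
    empires = [{"imperialists": [imps[k], imps[2 * n_im - 1 - k]],
                "colonies": []} for k in range(n_im)]
    random.shuffle(colonies)          # Fisher-Yates shuffle
    for i, col in enumerate(colonies):
        empires[i % n_im]["colonies"].append(col)  # round-robin allocation
    return empires
```

With $N=10$ solutions and $N_{im}=2$ empires, each empire receives two imperialists and $(10-4)/2=3$ colonies.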

4.2 Search Strategy

Two global search operators are given: precedence-preserving order-based crossover (POX) [26] and uniform crossover [25]. When POX is performed on solutions $x,y$, parents 1 and 2 are formed by using the scheduling string and the BPM scheduling string of $x$ and $y$, respectively; in the two parents, the first part is the scheduling string and the second is the BPM scheduling string. POX is then performed on the first part and the second part sequentially, and two offspring are generated. Fig. 2 gives an example of POX. When uniform crossover is used for $x,y$, two parents are obtained by using the machine assignment string and the BPM assignment string, and then two offspring are generated.


Figure 2: Example of POX.

Six neighborhood structures are denoted as $\mathcal{N}_1$–$\mathcal{N}_6$. $\mathcal{N}_1$ is defined as follows: the swap operator is first applied to the scheduling string and then to the BPM scheduling string. When the swap operator of $\mathcal{N}_1$ is replaced with the insertion operator and the inversion operator, respectively, $\mathcal{N}_2$ and $\mathcal{N}_3$ are obtained. $\mathcal{N}_4$ and $\mathcal{N}_5$ act on the machine assignment string. Specifically, $\mathcal{N}_4$ operates as follows: randomly select an operation $O_{ij}$ satisfying $|S_{ij}|>1$, randomly choose a machine $M_k\in S_{ij}$ with $M_k\ne M_{\sigma_{ij}}$, and set $M_{\sigma_{ij}}=M_k$. $\mathcal{N}_5$ is defined as: select the machine $M_k$ ($k\le m$) with the maximum completion time, then choose an operation $O_{ij}$ such that $|S_{ij}|>1$ and $M_k\in S_{ij}$, and set $M_{\sigma_{ij}}=M_k$. $\mathcal{N}_6$ acts on the BPM assignment string: determine the BPM $M_k$ with the maximum workload, randomly select a job processed on $M_k$, and move it to the BPM with the minimum workload.
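The three sequence operators underlying $\mathcal{N}_1$–$\mathcal{N}_3$ can be sketched in Python as below. The paper applies each operator first to the scheduling string and then to the BPM scheduling string; this sketch shows a single application to one string, with illustrative names.

```python
import random

def swap(seq):
    """N1-style swap: exchange two randomly chosen positions."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq):
    """N2-style insertion: remove one element and reinsert it elsewhere."""
    s = seq[:]
    x = s.pop(random.randrange(len(s)))
    s.insert(random.randrange(len(s) + 1), x)
    return s

def inversion(seq):
    """N3-style inversion: reverse a randomly chosen sub-sequence."""
    s = seq[:]
    i, j = sorted(random.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s
```

All three return a permutation of the input, which is exactly what a scheduling-string neighborhood must preserve.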

For each solution $x_i$ at generation $gen$, its search strategy is $SO_i^{gen}=\{GS,\mathcal{N}_\upsilon\}$, where $GS$ is one of POX and uniform crossover and $1\le\upsilon\le6$. Initially, $GS$ and $\mathcal{N}_\upsilon$ are randomly chosen from {POX, uniform crossover} and $\mathcal{N}_1$–$\mathcal{N}_6$, respectively.

Search process 1 for $x_i\in P$ and $y$ is described below: let $e=0$ and repeat the following process three times. If $e<2$, then execute $GS$ of $SO_i^{gen}$ on $x_i,y$ and obtain offspring 1 and 2; generate $z_1,z_2$ by combining offspring 1 and 2 with the other two strings of $x_i,y$; if $z_1$ dominates $x_i$, then $x_i=z_1$ and update $\Omega$ with $x_i$; if $z_2$ dominates $x_i$, then $x_i=z_2$ and update $\Omega$ with $x_i$; if $x_i$ is renewed, then $e=2$; otherwise, produce $z_3\in\mathcal{N}_\upsilon(x_i)$ and update $x_i$ and $\Omega$ with $z_3$ in the same way as with $z_1,x_i$.

where $\Omega$ is the external archive and is updated as follows: add $x_i$ into $\Omega$, compare all solutions of $\Omega$, and delete all dominated solutions from $\Omega$. $\mathcal{N}_l(x)$ is the set of neighborhood solutions of $x$ produced by $\mathcal{N}_l$.

For $x_i,x_j\in P$, suppose $\bar{c}_{x_i}>\bar{c}_{x_j}$.

Search process 2 for xi,xjP and y1,y2 is shown as follows:

(1) Execute $GS$ of $SO_i^{gen}$ between $x_i,y_1$ as for $x_i,y$ in search process 1, obtain $z_1,z_2$, and update $x_j$ with $z_1,z_2$ as done for $z_1,z_2,x_i$ in search process 1; then produce $z_3$ and renew $x_j$ with $z_3$ as done in search process 1.

(2) Perform $GS$ of $SO_j^{gen}$ between $x_j,y_2$ and obtain solutions $z_1,z_2$; if $x_i$ dominates $x_j$, then update $x_j$ with $z_1,z_2$; otherwise, renew $x_i$ with $z_1,z_2$.

4.3 Adaptive Assimilation and Revolution

Assimilation is the main step for producing new solutions. In the assimilation process of empire $k$, the imperialist is used as the learning object of the colonies, and each colony moves a distance $\varepsilon$ along the direction toward its imperialist. The moving distance $\varepsilon$ is a random number drawn from a uniform distribution on the interval $[0,s\times e]$, where $s\in(1,2)$ and $e$ is the distance between the colony and the imperialist. A deviation from the direct line is also used. The deviation is represented by $\nu$, which follows a uniform distribution on $[-\psi,\psi]$, where $\psi$ is an arbitrary parameter.

In general, only one imperialist as a unique learning object exists in an empire, and each colony moves to its imperialist in the same way. In this study, an adaptive assimilation is given, in which two learning objects of each empire are used adaptively, and the search strategy of each colony is determined dynamically.

Revolution is similar to the mutation of GA and is applied to increase exploration and prevent early convergence to local optima. Revolution proceeds as follows: in each empire $k$, for each colony $x_i\in T_k$, if a random number $rand<UR$, then perform neighborhood search on $x_i$, where $rand$ follows a uniform distribution on $[0,1]$ and $T_k$ is the set of all colonies of empire $k$. In this study, $UR$ is not used, and only two solutions of empire $k$ undergo revolution.

Adaptive assimilation and revolution on generation gen are described as follows.

(1) Calculate $\bar{c}_{x_i}$ of each $x_i\in P$, the normalized total cost $NTC_k$ for each empire $k$, and the average normalized total cost $ANTC$ of all empires.

(2) For each empire k,

1) Decide the $\alpha_k$ colonies with the biggest $\bar{c}_{x_i}$ and define $\Phi_k$ as the set of these colonies; for each $x_i\in\Phi_k$, randomly choose $y\in\Omega$ and execute search process 1 according to search strategy $SO_i^{gen}$; if $x_i$ is updated in search process 1, then $trail_i=0$; otherwise, $trail_i=trail_i+1$.

2) For each $x_i\in T_k\setminus\Phi_k$: if $tag_i=1$ (resp. 2), then $y=IM_k$ (resp. $IM_{2N_{im}+1-k}$); if $tag_i=0$, then let $y$ be randomly chosen from $IM_k,IM_{2N_{im}+1-k}$ and let $tag_i=1$ (resp. 2) if $y=IM_k$ (resp. $IM_{2N_{im}+1-k}$); execute search process 1 and update $trail_i$ as in step 1); if $x_i$ is not renewed, then $tag_i=0$.

(3) For each empire $k$: if $NTC_k\le ANTC$, then select the two colonies with the biggest normalized cost for revolution; otherwise, use the two imperialists for revolution. For each chosen solution, repeat the following step three times: produce a solution $z$ by the neighborhood structure of its search strategy and update the chosen solution with $z$ as done for $z_3,x_i$ in search process 1.

where $trail_i$ and $tag_i$ are integers associated with $x_i$; initially, $trail_i=tag_i=0$; $trail_i$ is varied as in 1) of step (2), and $tag_i$ is used and renewed as in 2) of step (2).

$NTC_k=\bar{c}_{IM_k}+\bar{c}_{IM_{2N_{im}+1-k}}+\xi\sum_{x_i\in T_k}\bar{c}_{x_i}\Big/\big((N-2N_{im})/N_{im}\big)$ (23)

$\alpha_k=\varphi\times\Big(0.2+0.8\frac{gen}{max\_gen}\Big)\times\frac{TP_{max}-TP_k}{TP_{max}-TP_{min}+\varepsilon}\times(N-2N_{im})/N_{im}$ (24)

$TP_k=NTC_k-\widetilde{NTC}_k$ (25)

$ANTC=\frac{1}{N_{im}}\sum_{k=1}^{N_{im}}NTC_k$ (26)

where $\xi$ is a real number often set to 0.1, $\varphi$ is a real number, $\varepsilon>0$ is close to 0 to avoid the denominator $TP_{max}-TP_{min}$ being equal to 0, $\widetilde{NTC}_k$ indicates the normalized total cost computed in step (1) of generation $gen-1$, and $max\_gen$ is the maximum number of generations.
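Under these definitions, Eq. (24) can be sketched as a small Python function. Rounding the real value to an integer number of colonies is an assumption here, since the paper does not state the truncation rule; all names are illustrative.

```python
def alpha_k(phi, gen, max_gen, tp_k, tp_max, tp_min, n_pop, n_im, eps=1e-6):
    """Sketch of Eq. (24): how many elite colonies empire k explores.

    The (0.2 + 0.8*gen/max_gen) term grows the elite set as the search
    proceeds, and the TP ratio favors empires whose normalized total
    cost changed least since the previous generation. eps keeps the
    denominator positive; rounding to an integer is an assumption.
    """
    frac = phi * (0.2 + 0.8 * gen / max_gen)
    frac *= (tp_max - tp_k) / (tp_max - tp_min + eps)
    colonies_per_empire = (n_pop - 2 * n_im) / n_im
    return round(frac * colonies_per_empire)

# 50 solutions, 5 empires, end of the run, smallest TP change:
print(alpha_k(0.5, 100, 100, 0.0, 1.0, 0.0, 50, 5))  # 4
```

With $\varphi=0.5$ the elite set grows from about 1 colony early in the run to 4 of the 8 colonies per empire at the end.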

In step (3), $NTC_k\le ANTC$ means that the colonies of empire $k$ have low solution quality and the imperialists may not have been renewed in recent generations; in this case, only the two best colonies are used for revolution to avoid wasting computing resources on worse colonies. If $NTC_k>ANTC$, then the imperialists may have been updated continuously in recent generations; in this case, the imperialists are applied for revolution in generation $gen$.

After assimilation and revolution are performed, the two imperialists are updated in the following way: in each empire $k$, let $z_1=IM_k$, $z_2=IM_{2N_{im}+1-k}$, compute $\bar{c}_{x_i}$ for each $x_i$ of empire $k$, decide the two solutions $x_1,x_2$ with the biggest normalized cost, and let $IM_k=x_1$, $IM_{2N_{im}+1-k}=x_2$. If $z_1,z_2$ are identical to $IM_k,IM_{2N_{im}+1-k}$, then $tag_i$ is unchanged for each $x_i\in T_k$; if exactly one of $z_1,z_2$ equals $IM_k$ or $IM_{2N_{im}+1-k}$, then set $tag_i=2$ (resp. 1) for each $x_i\in T_k$ when $z_1$ (resp. $z_2$) is still an imperialist; if neither $z_1$ nor $z_2$ equals $IM_k$ or $IM_{2N_{im}+1-k}$, then let $tag_i=0$ for each $x_i\in T_k$.

As shown above, when $tag_i$ is 0 for each $x_i\in T_k$, both imperialists have been updated, so one of them is randomly chosen in 2) of step (2) of adaptive assimilation. When $tag_i>0$, one imperialist is unchanged, and the new imperialist of empire $k$ is chosen in assimilation.

4.4 Adaptive Imperialist Competition with Cooperation

In ICA, imperialist competition is often implemented as follows: compute the total cost and $NTC_k$ for each empire $k$, then calculate the power $EP_k$ of each empire $k$, construct a vector $[EP_1-rand_1,EP_2-rand_2,\ldots,EP_{N_{im}}-rand_{N_{im}}]$, and choose the empire $k$ with the biggest $EP_k-rand_k$; finally, move the weakest colony from the weakest empire to empire $k$. In this study, an adaptive imperialist competition with cooperation is given, where $rand_k$ follows a uniform distribution on $[0,1]$.

$EP_k=\Big|NTC_k\Big/\sum_{l=1}^{N_{im}}NTC_l\Big|$ (27)

Adaptive imperialist competition is described as follows.

(1) Calculate $\bar{c}_{x_i}$ of each $x_i\in P$ and the normalized total cost $NTC_k$ for each empire $k$; sort all empires in descending order of $NTC_k$ and suppose that $NTC_1\ge NTC_2\ge\cdots\ge NTC_{N_{im}}$.

(2) If $NTC_1/NTC_{N_{im}}\le\beta$, then construct the vector and choose the empire $k$ with the biggest $EP_k-rand_k$ as done above; decide the $x_j\in T_{N_{im}}$ with the biggest $trail_j$ and the smallest normalized cost, choose the $x_i\in T_k$ with the smallest normalized cost, swap $x_j,x_i$, and let $tag_i=tag_j=0$.

(3) If $NTC_1/NTC_{N_{im}}>\beta$, then

1) Sort all colonies of empires 1 and $N_{im}$ in descending order of normalized cost sequentially.

2) Let $\omega=1$ and repeat the following steps until $\omega=(N-2N_{im})/N_{im}$: suppose $x_i\in T_1$ and $x_j\in T_{N_{im}}$ are the solutions with ranking $\omega$ in empires 1 and $N_{im}$, respectively; determine $y_1$ in empire 1 and $y_2$ in empire $N_{im}$ according to $tag_i,tag_j$ as done in 2) of step (2) of assimilation; execute search process 2 for $x_i,x_j,y_1,y_2$; let $\omega=\omega+1$.

3) Perform the imperialist competition of step (2) for the remaining $N_{im}-2$ empires.

In 2) of step (3), cooperation between the strongest empire 1 and the weakest empire $N_{im}$ is performed by executing search process 2 between $x_i,x_j$, while imperialist competition is performed among all empires.
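The roulette-like empire selection of step (2) can be sketched in Python as follows, under the assumption that the vector entries are $EP_k-rand_k$; names are illustrative, not the authors' code.

```python
import random

def select_winning_empire(ntc):
    """Sketch of the competition step in Section 4.4.

    Computes the power EP_k of Eq. (27) for each empire, perturbs it with
    a uniform random number, and returns the index of the empire with the
    biggest EP_k - rand_k; the weakest colony of the weakest empire is
    then moved to the winner.
    """
    total = sum(ntc)
    ep = [abs(v / total) for v in ntc]        # Eq. (27)
    scores = [p - random.random() for p in ep]
    return max(range(len(ntc)), key=scores.__getitem__)
```

An empire with a dominant normalized total cost wins most, but not all, competitions, which keeps the colony exchange stochastic.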

4.5 Algorithm Description

The steps of CAICA are shown below.

(1) Generate initial population P, construct initial Ω, let gen=1, for each xiP, set initial SOigen.

(2) Form initial empires.

(3) Perform adaptive assimilation and revolution.

(4) Renew imperialists of each empire.

(5) Execute adaptive imperialist competition with cooperation.

(6) Update each colony $x_i\in T_k$ with $trail_i>\rho$ in each empire.

(7) gen=gen+1, if the stopping condition is not met, then go to (3), otherwise, terminate the search.

Fig. 3 shows the flow chart of CAICA. In each empire $k$, $trail_i>\rho$ means that $x_i$ has not been renewed in the last $\rho$ generations. If $trail_i>\rho$, then $x_i$ is replaced with a randomly generated solution, and then search process 1 is performed for $x_i$ and a randomly chosen solution $y\in\Omega$.


Figure 3: Flow chart of CAICA.

Unlike previous ICAs [47–50], CAICA incorporates several novel features: (1) each empire is assigned two imperialists; (2) adaptive assimilation and revolution are introduced, in which colonies adaptively select one imperialist for assimilation based on their tag value, and the two imperialists of each empire are updated in each generation; (3) adaptive imperialist competition with cooperation is proposed.

5  Computational Experiments

Extensive computational experiments are conducted to test the performance of CAICA for FJPBPMSP. All experiments are implemented using Visual Studio 2022 and run on a PC with 8 GB RAM and a 1.8 GHz CPU.

5.1 Instances, Comparative Algorithms and Metrics

FJPBPMSP is seldom considered, and no existing instances are available. To construct instances for FJPBPMSP, instances Mk01–Mk10 [51] and La01–La40 [52] are extended by adding BPMs, and the extended instances are labelled mMk01–mMk10 and mLa01–mLa40. The added data are as follows: q ∈ [2,4], wi ∈ [20,60], Qk (k>m) ∈ [80,120], Tkl ∈ [5,10], pek (k≤m) ∈ [2,4], pek (k>m) ∈ [6,12], etrans=1, iek=1.

The coverage metric 𝒞 [53] is applied to compare the approximate Pareto optimal sets obtained by the algorithms:

𝒞(L,B) = |{b ∈ B : ∃ h ∈ L, h ⪯ b}| / |B|   (28)

Metric HV [54] calculates the hypervolume of the region in the objective space that is dominated by the non-dominated solution set ΩA and bounded by the reference point Z:

HV = ∑x∈ΩA v(x,Z)   (29)

Metric IGD [55] measures the distance from the non-dominated set ΩA to the reference set Ω:

IGD(ΩA,Ω) = (1/|Ω|) ∑y∈Ω minx∈ΩA d(x,y)   (30)

where ΩA is the non-dominated set of algorithm A, and Ω is the set composed of the non-dominated solutions obtained by all algorithms. Z is the reference point, set to (1.1, 1.1); v(x,Z) represents the hypervolume between reference point Z and solution x, and d(x,y) represents the distance between solutions x and y in the normalized objective space.
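Under the conventions above (two minimized, normalized objectives and Z=(1.1, 1.1)), the three metrics can be sketched in Python as follows. This is an illustration, not the authors' code; dominance is taken as the weak (componentwise ≤) relation, and the two-dimensional hypervolume sweep assumes its input set is already mutually non-dominated.

```python
import math

def weakly_dominates(h, b):
    # h weakly dominates b for minimization: h is no worse in every objective.
    return all(hi <= bi for hi, bi in zip(h, b))

def coverage(L, B):
    # Eq. (28): fraction of B weakly dominated by some member of L.
    return sum(1 for b in B if any(weakly_dominates(h, b) for h in L)) / len(B)

def igd(A, ref):
    # Eq. (30): mean distance from each reference-set point to its
    # nearest member of A in the normalized objective space.
    return sum(min(math.dist(x, y) for x in A) for y in ref) / len(ref)

def hv_2d(A, z=(1.1, 1.1)):
    # Hypervolume of the region dominated by A and bounded by z, for
    # two objectives: sweep the points sorted by the first objective
    # and accumulate the rectangular slices up to the reference point.
    pts = sorted(p for p in A if p[0] < z[0] and p[1] < z[1])
    total = 0.0
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else z[0]
        total += (next_f1 - f1) * (z[1] - f2)
    return total
```

For example, a single solution at (0.1, 0.1) yields hv_2d = (1.1−0.1)×(1.1−0.1) = 1.0, which is consistent with the HV values above 1 reported for CAICA in Table 6.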

Three comparative algorithms are chosen. Hao et al. [24] developed HSGA for FJSP, aiming to minimize maximum machine load, total machine load, and makespan. Xiao et al. [25] presented IMOEA/D for FJSP considering makespan, tool number, machine load, and machine energy consumption. Luan et al. [26] proposed ENSGA-II for FJSP with makespan, total delay time, and total energy consumption.

When HSGA is used to solve FJPBPMSP, the BPM assignment string and the BPM scheduling string are added; POX and 𝒩1 are applied on the BPM scheduling string, and uniform crossover and 𝒩6 on the BPM assignment string. Crossovers of the four strings are executed sequentially, and mutations of the four strings are likewise done sequentially, as in HSGA.

For IMOEA/D, POX is applied on the scheduling string and the BPM scheduling string, and uniform crossover on the machine assignment string and the BPM assignment string; 𝒩1 is the mutation operator of the BPM scheduling string and 𝒩6 that of the BPM assignment string. During both crossover and mutation, the corresponding operations are executed sequentially on the four strings.

When ENSGA-II is applied, POX and uniform crossover are used on the BPM scheduling string and the BPM assignment string, respectively; during crossover, the corresponding operator is applied sequentially to each of the four strings. Similarly, for mutation, 𝒩1 and 𝒩6 are used for the BPM scheduling string and the BPM assignment string respectively, with each string mutated sequentially by its designated operator.

To show the effect of the new strategies of CAICA, an ICA is constructed, in which initial empire formation and imperialist competition follow the general ICA. In each empire k, when assimilation is done for each colony xi ∈ Tk, one operator is randomly chosen from POX and uniform crossover and executed between xi and IMk; during revolution, a neighborhood structure is randomly selected from 𝒩1–𝒩6 and applied.

5.2 Parameter Settings

Parameters of CAICA are N, Nim, φ, β, ρ and stopping condition.

Extensive experiments indicate that CAICA converges well when max_gen=1200; all comparative algorithms also converge well under this setting, so max_gen=1200 is adopted as the stopping condition for all algorithms.

The Taguchi method is used to determine the settings of the other parameters on instance mLa25. CAICA with each parameter combination is run 20 times on this instance. There are 16 parameter combinations according to the orthogonal array L16(4^5). Table 4 shows the levels of each parameter, and Fig. 4 gives the results of IGD and the S/N ratio based on IGD, where the S/N ratio is defined as −10×log10(IGD²). As shown in Fig. 4, the best settings are N=100, Nim=5, φ=0.5, β=1.3, ρ=6.
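The smaller-the-better S/N ratio used in the Taguchi analysis can be computed as follows (a generic sketch; the function name is ours, and with a single IGD observation the formula reduces to −10×log10(IGD²)):

```python
import math

def sn_smaller_the_better(values):
    # Taguchi smaller-the-better signal-to-noise ratio:
    # S/N = -10 * log10(mean of y^2). Larger S/N is better,
    # so the best parameter level maximizes this value.
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))
```

For instance, IGD = 0.1 gives S/N = −10×log10(0.01) = 20; smaller IGD values yield larger S/N ratios, which is why the best settings are read off the peaks of the S/N plot in Fig. 4.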

Figure 4: Main effect plot for means and S/N ratios.

For ICA, the parameters besides the stopping condition are N and Nim, both of which adopt the same settings as in CAICA. The parameter settings of the three comparative algorithms are directly adopted from their references; experimental results show that these settings remain effective and enable each algorithm to achieve its best performance.

5.3 Results and Analyses

CAICA, ICA, HSGA, IMOEA/D, and ENSGA-II are independently run 20 times on each instance. Tables 5–8 report the computational results and running times, where C indicates CAICA, I represents ICA, H denotes HSGA, D means IMOEA/D, and E stands for ENSGA-II. Table 9 presents the means and standard deviations of the metrics, computed for each algorithm over the 50 instances; taking IGD as an example, the values of each algorithm are obtained on each instance and then aggregated into the reported statistics. Table 10 describes the results of a paired-sample t-test for the four pairs of compared algorithms on the metrics IGD, HV, and 𝒞. In Table 10, the notation t-test (B1, B2) means that a paired t-test is conducted to judge whether algorithm B1 gives a better sample mean than B2; with the significance level set at 0.05, a statistically significant difference between B1 and B2 is established when the p-value is less than 0.05. Fig. 5 gives the boxplots of the five algorithms, and Fig. 6 shows the distributions of non-dominated solutions generated by all algorithms on three instances.
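The paired t-test operates on the 50 per-instance metric values of the two compared algorithms. A minimal sketch of the test statistic, using only the Python standard library, is shown below; obtaining the p-value additionally requires the t distribution with n−1 degrees of freedom (e.g. via scipy.stats.ttest_rel in a statistics package), which is not reproduced here, and the function name is ours.

```python
import math
import statistics

def paired_t_statistic(a, b):
    # t statistic of a paired-sample t-test on per-instance metric
    # values of two algorithms: mean of the paired differences divided
    # by its standard error. A positive t means the sample mean of a
    # exceeds that of b; whether that is "better" depends on the metric
    # (larger is better for HV, smaller is better for IGD).
    d = [x - y for x, y in zip(a, b)]
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(len(d)))
```

In the setting of Table 10, each sample would hold the 50 per-instance values of one metric for one algorithm, giving 49 degrees of freedom for the p-value.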

Figure 5: Box plots of five algorithms.

Figure 6: Distribution diagram of non-dominated solutions.

As shown in Table 5, CAICA achieves an IGD of 0 on 22 of the 50 instances, and ICA yields significantly larger IGD values than CAICA on most instances; obviously, removing the new strategies from CAICA severely degrades its convergence performance. It can be seen from Table 6 that the HV of CAICA is greater than 0.8 on 44 instances and larger than 1.1 on 36 instances, while the HV of ICA is less than 0.5 on most instances. Table 7 reveals that 𝒞(C,I) is 1 for all instances, that is, all non-dominated solutions of ICA are dominated by those of CAICA. As shown in Table 9, CAICA has an average IGD of 0.067 with a standard deviation of 0.115, while the average IGD of ICA reaches 0.929 with a standard deviation of 0.265; CAICA has an average HV of 1.074 with a standard deviation of 0.192, while the average HV of ICA is only 0.156 with a standard deviation of 0.138. CAICA thus produces better averages and standard deviations than ICA. Additionally, all p-values in Table 10 confirm the statistically significant superiority of CAICA over ICA in all three metrics. The above analyses confirm significant performance differences between CAICA and ICA, and the same conclusion can be drawn from Figs. 5 and 6. Therefore, the new strategies of CAICA have a positive impact on its performance and are effective.

When CAICA is compared with HSGA, it can be found from Tables 5–7 that CAICA performs better than HSGA. CAICA obtains a smaller IGD than HSGA on 46 instances and achieves a higher HV than HSGA on 45 instances; 𝒞(C,H) is greater than 𝒞(H,C) on 43 instances, and 𝒞(C,H) is 1 on 37 instances. Figs. 5 and 6 also reveal significant performance differences between CAICA and HSGA.

It can also be concluded from Tables 5–7 that CAICA outperforms IMOEA/D: CAICA obtains a smaller IGD than IMOEA/D on 40 instances and achieves a higher HV on 40 instances, and Table 7 reveals that the non-dominated solutions generated by CAICA completely dominate those of IMOEA/D on 28 instances. The performance advantages of CAICA can also be seen from Figs. 5 and 6.

As listed in Tables 5–7, CAICA obtains a smaller IGD than ENSGA-II on 45 instances and achieves a higher HV on 44 instances; 𝒞(C,E) is bigger than 𝒞(E,C) on 45 instances, and 𝒞(C,E)=1 on 42 instances. Obviously, CAICA performs better than ENSGA-II, and this conclusion can also be seen from Figs. 5 and 6.

As shown in Table 9, CAICA achieves the best mean values and standard deviations for IGD and HV, and for each comparative algorithm X, the mean and standard deviation of 𝒞(X,C) are smaller than those of 𝒞(C,X), indicating the overall superiority of the solution set of CAICA. All p-values in Table 10 demonstrate that CAICA provides better results than the comparative algorithms in a statistical sense.

As analyzed above, CAICA can provide better results than its three comparative algorithms within a similar amount of time, as shown in Table 8. The good performance of CAICA results from its new strategies: empires have new structures with two imperialists and two learning objects that are used adaptively in assimilation and revolution; all empires have the same number of colonies, and no elimination is applied; moreover, cooperation and adaptive adjustment of search strategies, colonies, and imperialists are also adopted. These features yield high population diversity and strong exploration ability; thus, CAICA is a highly competitive method for solving FJPBPMSP.

6  Conclusion

FJPBPMSP is an integration of FJSP and parallel BPM scheduling and is seldom considered. In this study, a new algorithm named CAICA is proposed to minimize makespan and total energy consumption. In CAICA, a four-string representation is adopted, and initial empires with a new structure are formed by uniformly dividing the population. An adaptive assimilation and revolution are given. An adaptive imperialist competition with cooperation is provided. Search strategies, imperialists, and colonies are also updated. Extensive experiments are conducted, and comparisons with peer algorithms validate the rationality and effectiveness of the new strategies and the competitive advantages of CAICA for solving FJPBPMSP.

Distributed FJPBPMSP, as well as the problem with practical constraints such as maintenance and setup time, has seldom been considered. The integration of meta-heuristics with reinforcement learning (RL) presents a promising path to solve FJPBPMSP; thus, the application of RL-based meta-heuristics to various FJPBPMSP will be our primary future research direction. On the other hand, flexible flow shop scheduling problems with BPM extensively exist in real-life manufacturing processes. The problem has high complexity and high research value, so the problem and its optimization algorithms are also our future topics.

Acknowledgement: The authors would like to thank the School of Automation, Wuhan University of Technology, for providing the research platform and technical support.

Funding Statement: The authors received no specific funding for this study.

Author Contributions: Study conception and design: Jie Wang; analysis and interpretation of results: Jie Wang; draft manuscript preparation: Jie Wang; review and editing: Deming Lei. All authors reviewed and approved the final version of the manuscript.

Availability of Data and Materials: All standard instances used in this study are publicly available in the open-source toolkit, which can be accessed via the following link: https://github.com/wulijie-coder/FJSP-.

Ethics Approval: Not applicable.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. Zhang H, Lv S, Xin D, Jin H. A genetic algorithm enhanced with neighborhood structure for general flexible job shop scheduling with parallel batch processing machine. Expert Syst Appl. 2025;265(4):125888. doi:10.1016/j.eswa.2024.125888. [Google Scholar] [CrossRef]

2. Song L, Liu C, Shi H. Discrete particle swarm algorithm with Q-learning for solving flexible job shop scheduling problem with parallel batch processing machine. J Phys Conf Ser. 2022;2303(1):012022. doi:10.1088/1742-6596/2303/1/012022. [Google Scholar] [CrossRef]

3. Zandieh M, Khatami AR, Rahmati SHA. Flexible job shop scheduling under condition-based maintenance: improved version of imperialist competitive algorithm. Appl Soft Comput. 2017;58(7):449–64. doi:10.1016/j.asoc.2017.04.060. [Google Scholar] [CrossRef]

4. Li M, Lei D. An imperialist competitive algorithm with feedback for energy-efficient flexible job shop scheduling with transportation and sequence-dependent setup times. Eng Appl Artif Intell. 2021;103(1):104307. doi:10.1016/j.engappai.2021.104307. [Google Scholar] [CrossRef]

5. Abedi M, Seidgar H, Fazlollahtabar H, Bijani R. Bi-objective optimisation for scheduling the identical parallel batch-processing machines with arbitrary job sizes, unequal job release times and capacity limits. Int J Prod Res. 2015;53(6):1680–711. doi:10.1080/00207543.2014.952795. [Google Scholar] [CrossRef]

6. Brucker P, Schlie R. Job-shop scheduling with multi-purpose machines. Computing. 1990;45(4):369–75. doi:10.1007/BF02238804. [Google Scholar] [CrossRef]

7. Dauzère-Pérès S, Ding J, Shen L, Tamssaouet K. The flexible job shop scheduling problem: a review. Eur J Oper Res. 2024;314(2):409–32. doi:10.1016/j.ejor.2023.05.017. [Google Scholar] [CrossRef]

8. Soto C, Dorronsoro B, Fraire H, Cruz-Reyes L, Gomez-Santillan C, Rangel N. Solving the multi-objective flexible job shop scheduling problem with a novel parallel branch and bound algorithm. Swarm Evol Comput. 2020;53:100632. doi:10.1016/j.swevo.2019.100632. [Google Scholar] [CrossRef]

9. Gao KZ, Suganthan PN, Tasgetiren MF, Pan QK, Sun QQ. Effective ensembles of heuristics for scheduling flexible job shop problem with new job insertion. Comput Ind Eng. 2015;90:107–17. doi:10.1016/j.cie.2015.09.005. [Google Scholar] [CrossRef]

10. Ozturk G, Bahadir O, Teymourifar A. Extracting priority rules for dynamic multi-objective flexible job shop scheduling problems using gene expression programming. Int J Prod Res. 2019;57(10):3121–37. doi:10.1080/00207543.2018.1543964. [Google Scholar] [CrossRef]

11. Yazdani M, Gholami M, Zandieh M, Mousakhani M. A simulated annealing algorithm for flexible job-shop scheduling problem. J Appl Sci. 2009;9(4):662–70. doi:10.3923/jas.2009.662.670. [Google Scholar] [CrossRef]

12. Shen L, Dauzère-Pérès S, Neufeld JS. Solving the flexible job shop scheduling problem with sequence-dependent setup times. Eur J Oper Res. 2018;265(2):503–16. doi:10.1016/j.ejor.2017.08.021. [Google Scholar] [CrossRef]

13. Yazdani M, Amiri M, Zandieh M. Flexible job-shop scheduling with parallel variable neighborhood search algorithm. Expert Syst Appl. 2010;37(1):678–87. doi:10.1016/j.eswa.2009.06.007. [Google Scholar] [CrossRef]

14. Zhang J, Yang J. Flexible job-shop scheduling with flexible workdays, preemption, overlapping in operations and satisfaction criteria: an industrial application. Int J Prod Res. 2016;54(16):4894–918. doi:10.1080/00207543.2015.1134839. [Google Scholar] [CrossRef]

15. Gu XL, Huang M, Liang X. A discrete particle swarm optimization algorithm with adaptive inertia weight for solving multiobjective flexible job-shop scheduling problem. IEEE Access. 2020;8:33125–36. doi:10.1109/ACCESS.2020.2974014. [Google Scholar] [CrossRef]

16. Huang RH, Yang CL, Cheng WC. Flexible job shop scheduling with due window—a two-pheromone ant colony approach. Int J Prod Econ. 2013;141(2):685–97. doi:10.1016/j.ijpe.2012.10.011. [Google Scholar] [CrossRef]

17. Luo S, Zhang L, Fan Y. Energy-efficient scheduling for multi-objective flexible job shops with variable processing speeds by grey wolf optimization. J Clean Prod. 2019;234(1):1365–84. doi:10.1016/j.jclepro.2019.06.151. [Google Scholar] [CrossRef]

18. Ebrahimi A, Jeon HW, Lee S, Wang C. Minimizing total energy cost and tardiness penalty for a scheduling-layout problem in a flexible job shop system: a comparison of four metaheuristic algorithms. Comput Ind Eng. 2020;141(1):106295. doi:10.1016/j.cie.2020.106295. [Google Scholar] [CrossRef]

19. Li JQ, Deng JW, Li CY, Han YY, Tian J, Zhang B, et al. An improved Jaya algorithm for solving the flexible job shop scheduling problem with transportation and setup times. Knowl-Based Syst. 2020;200(3):106032. doi:10.1016/j.knosys.2020.106032. [Google Scholar] [CrossRef]

20. Caldeira RH, Gnanavelbabu A, Vaidyanathan T. An effective backtracking search algorithm for multi-objective flexible job shop scheduling considering new job arrivals and energy consumption. Comput Ind Eng. 2020;149:106863. doi:10.1016/j.cie.2020.106863. [Google Scholar] [CrossRef]

21. Akram K, Bhutta MU, Butt SI, Jaffery SHI, Khan M, Khan AZ, et al. A pareto-optimality based black widow spider algorithm for energy efficient flexible job shop scheduling problem considering new job insertion. Appl Soft Comput. 2024;164:111937. doi:10.1016/j.asoc.2024.111937. [Google Scholar] [CrossRef]

22. Jiang T, Liu L, Zhu H. A Q-learning-based biology migration algorithm for energy-saving flexible job shop scheduling with speed adjustable machines and transporters. Swarm Evol Comput. 2024;90(7):101655. doi:10.1016/j.swevo.2024.101655. [Google Scholar] [CrossRef]

23. Zhao S, Zhou H, Zhao Y, Wang D. DQL-assisted competitive evolutionary algorithm for energy-aware robust flexible job shop scheduling under unexpected disruptions. Swarm Evol Comput. 2024;91(2):101750. doi:10.1016/j.swevo.2024.101750. [Google Scholar] [CrossRef]

24. Hao L, Zou Z, Liang X. Solving multi-objective energy-saving flexible job shop scheduling problem by hybrid search genetic algorithm. Comput Ind Eng. 2025;200(4):110829. doi:10.1016/j.cie.2024.110829. [Google Scholar] [CrossRef]

25. Xiao B, Zhao Z, Wu Y, Zhu X, Peng S, Su H. An improved MOEA/D for multi-objective flexible job shop scheduling by considering efficiency and cost. Comput Oper Res. 2024;167(1):106674. doi:10.1016/j.cor.2024.106674. [Google Scholar] [CrossRef]

26. Luan F, Zhao H, Liu SQ, He Y, Tang B. Enhanced NSGA-II for multi-objective energy-saving flexible job shop scheduling. Sustain Comput Inform Syst. 2023;39:100901. doi:10.1016/j.suscom.2023.100901. [Google Scholar] [CrossRef]

27. Fan L, Lei Q, Song Y, Liu Y, Yang Y. Knowledge-enhanced multi-objective memetic algorithm for energy-efficient flexible job shop scheduling with limited multi-load automated guided vehicles. Eng Appl Artif Intell. 2025;159(3):111771. doi:10.1016/j.engappai.2025.111771. [Google Scholar] [CrossRef]

28. Jula P, Leachman RC. Coordinated multistage scheduling of parallel batch-processing machines under multiresource constraints. Oper Res. 2010;58(4-part-1):933–47. doi:10.1287/opre.1090.0788. [Google Scholar] [CrossRef]

29. Li X, Chen H, Du B, Tan Q. Heuristics to schedule uniform parallel batch processing machines with dynamic job arrivals. Int J Comput Integr Manuf. 2013;26(5):474–86. doi:10.1080/0951192X.2012.731612. [Google Scholar] [CrossRef]

30. Xu R, Chen H, Li X. A bi-objective scheduling problem on batch machines via a pareto-based ant colony system. Int J Prod Econ. 2013;145(1):371–86. doi:10.1016/j.ijpe.2013.04.053. [Google Scholar] [CrossRef]

31. Jia Z, Yan J, Leung JYT, Li K, Chen H. Ant colony optimization algorithm for scheduling jobs with fuzzy processing time on parallel batch machines with different capacities. Appl Soft Comput. 2019;75:548–61. doi:10.1016/j.asoc.2018.11.027. [Google Scholar] [CrossRef]

32. Zhou S, Liu M, Chen H, Li X. An effective discrete differential evolution algorithm for scheduling uniform parallel batch processing machines with non-identical capacities and arbitrary job sizes. Int J Prod Econ. 2016;179(1):1–11. doi:10.1016/j.ijpe.2016.05.014. [Google Scholar] [CrossRef]

33. Lu S, Liu X, Pei J, Thai MT, Pardalos PM. A hybrid ABC-TS algorithm for the unrelated parallel-batching machines scheduling problem with deteriorating jobs and maintenance activity. Appl Soft Comput. 2018;66(2):168–82. doi:10.1016/j.asoc.2018.02.018. [Google Scholar] [CrossRef]

34. Zhou S, Xie J, Du N, Pang Y. A random-keys genetic algorithm for scheduling unrelated parallel batch processing machines with different capacities and arbitrary job sizes. Appl Math Comput. 2018;334(8):254–68. doi:10.1016/j.amc.2018.04.024. [Google Scholar] [CrossRef]

35. Kong M, Liu X, Pei J, Pardalos PM, Mladenovic N. Parallel-batching scheduling with nonlinear processing times on a single and unrelated parallel machines. J Glob Optim. 2020;78(4):693–715. doi:10.1007/s10898-018-0705-3. [Google Scholar] [CrossRef]

36. Xiao X, Ji B, Yu SS, Wu G. A tabu-based adaptive large neighborhood search for scheduling unrelated parallel batch processing machines with non-identical job sizes and dynamic job arrivals. Flex Serv Manuf J. 2024;36(2):409–52. doi:10.1007/s10696-023-09488-9. [Google Scholar] [CrossRef]

37. Hu K, Che Y, Ng TS, Deng J. Unrelated parallel batch processing machine scheduling with time requirements and two-dimensional packing constraints. Comput Oper Res. 2024;162(3):106474. doi:10.1016/j.cor.2023.106474. [Google Scholar] [CrossRef]

38. Ham AM, Cakici E. Flexible job shop scheduling problem with parallel batch processing machines: MIP and CP approaches. Comput Ind Eng. 2016;102(1):160–5. doi:10.1016/j.cie.2016.11.001. [Google Scholar] [CrossRef]

39. Ham A. Flexible job shop scheduling problem for parallel batch processing machine with compatible job families. Appl Math Model. 2017;45(3):551–62. doi:10.1016/j.apm.2016.12.034. [Google Scholar] [CrossRef]

40. Ji B, Zhang S, Yu SS, Xiao X, Chen C, Zheng G. Novel model and solution method for flexible job shop scheduling problem with batch processing machines. Comput Oper Res. 2024;161(1):106442. doi:10.1016/j.cor.2023.106442. [Google Scholar] [CrossRef]

41. Wang H, Xiong H, Zuo W, Shi S. An improved scatter search algorithm for solving job shop scheduling problems with parallel batch processing machine. Sci Rep. 2025;15(1):11872. doi:10.1038/s41598-025-92761-8. [Google Scholar] [PubMed] [CrossRef]

42. Xue L, Zhao S, Mahmoudi A, Feylizadeh MR. Flexible job-shop scheduling problem with parallel batch machines based on an enhanced multi-population genetic algorithm. Complex Intell Syst. 2024;10(3):4083–101. doi:10.1007/s40747-024-01374-7. [Google Scholar] [CrossRef]

43. Lei D, Zheng Y, Guo X. A shuffled frog-leaping algorithm for flexible job shop scheduling with the consideration of energy consumption. Int J Prod Res. 2017;55(11):3126–40. doi:10.1080/00207543.2016.1262082. [Google Scholar] [CrossRef]

44. Zhang G, Gao L, Shi Y. An effective genetic algorithm for the flexible job-shop scheduling problem. Expert Syst Appl. 2011;38(4):3563–73. doi:10.1016/j.eswa.2010.08.145. [Google Scholar] [CrossRef]

45. Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput. 2002;6(2):182–97. doi:10.1109/4235.996017. [Google Scholar] [CrossRef]

46. Nasim Z, Bano Z, Ahmad M. Analysis of efficient random permutations generation for security applications. In: Proceedings of the 2015 International Conference on Advances in Computer Engineering and Applications; 2015 Mar 19–20; Ghaziabad, India. p. 337–41. doi:10.1109/ICACEA.2015.7164726. [Google Scholar] [CrossRef]

47. Lei D, Li H. A cooperated imperialist competitive algorithm for unrelated parallel batch machine scheduling problem. Comput Mater Contin. 2024;79(2):1855–74. doi:10.32604/cmc.2024.049480. [Google Scholar] [CrossRef]

48. Yan B, Liu Y, Huang Y. Improved discrete imperialist competition algorithm for order scheduling of automated warehouses. Comput Ind Eng. 2022;168:108075. doi:10.1016/j.cie.2022.108075. [Google Scholar] [CrossRef]

49. Elyasi M, Selcuk YS, Özener OÖ, Coban E. Imperialist competitive algorithm for unrelated parallel machine scheduling with sequence-and-machine-dependent setups and compatibility and workload constraints. Comput Ind Eng. 2024;190(3):110086. doi:10.1016/j.cie.2024.110086. [Google Scholar] [CrossRef]

50. Luo J, Zhou J, Jiang X. A modification of the imperialist competitive algorithm with hybrid methods for constrained optimization problems. IEEE Access. 2021;9:161745–60. doi:10.1109/ACCESS.2021.3133579. [Google Scholar] [CrossRef]

51. Tay JC, Ho NB. Evolving dispatching rules using genetic programming for solving multi-objective flexible job-shop problems. Comput Ind Eng. 2008;54(3):453–73. doi:10.1016/j.cie.2007.08.008. [Google Scholar] [CrossRef]

52. Hurink J, Jurisch B, Thole M. Tabu search for the job-shop scheduling problem with multi-purpose machines. OR Spektrum. 1994;15(4):205–15. doi:10.1007/BF01719451. [Google Scholar] [CrossRef]

53. Zitzler E, Thiele L. Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach. IEEE Trans Evol Comput. 1999;3(4):257–71. doi:10.1109/4235.797969. [Google Scholar] [CrossRef]

54. While L, Hingston P, Barone L, Huband S. A faster algorithm for calculating hypervolume. IEEE Trans Evol Comput. 2006;10(1):29–38. doi:10.1109/TEVC.2005.851275. [Google Scholar] [CrossRef]

55. Wang Y, Han Y, Wang Y, Pan QK, Wang L. Sustainable scheduling of distributed flow shop group: a collaborative multi-objective evolutionary algorithm driven by indicators. IEEE Trans Evol Comput. 2024;28(6):1794–808. doi:10.1109/TEVC.2023.3339558. [Google Scholar] [CrossRef]




cc Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.