Computers, Materials & Continua
DOI:10.32604/cmc.2022.021719
Article

A Hybrid Modified Sine Cosine Algorithm Using Inverse Filtering and Clipping Methods for Low Autocorrelation Binary Sequences

Siti Julia Rosli1,2, Hasliza A Rahim1,2,*, Khairul Najmy Abdul Rani1,2, Ruzelita Ngadiran2,3, Wan Azani Mustafa3,4, Muzammil Jusoh1,2, Mohd Najib Mohd Yasin1,2, Thennarasan Sabapathy1,2, Mohamedfareq Abdulmalek5, Wan Suryani Firuz Wan Ariffin2 and Ahmed Alkhayyat6

1Advanced Communication Engineering, Centre of Excellence (ACE), Universiti Malaysia Perlis (UniMAP), 01000 Kangar, Perlis, Malaysia
2Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
3Advanced Computing (AdvCOMP), Centre of Excellence, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
4Faculty of Electrical Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
5Faculty of Engineering and Information Science, University of Wollongong in Dubai, Dubai 20183, United Arab Emirates
6Department of Computer Technical Engineering, College of Technical Engineering, the Islamic University, 54001 Najaf, Iraq
*Corresponding Author: Hasliza A Rahim. Email: haslizarahim@unimap.edu.my
Received: 12 July 2021; Accepted: 19 October 2021

Abstract: The essential purpose of radar is to detect a target of interest and provide information concerning the target's location, motion, size, and other parameters. Knowledge of the properties of pulse trains shows that this class of signals is particularly well suited to digital processing and of increasing practical importance. The low autocorrelation binary sequence (LABS) problem is a complex combinatorial problem. The main difficulties in LABS are low Merit Factor (MF) and short sequence lengths; moreover, the maximum possible MF of 12.3248 cannot be reached because infinite length is unattainable. Therefore, this study combined two techniques to propose a new metaheuristic algorithm based on a Hybrid Modified Sine Cosine Algorithm with Cuckoo Search Algorithm (HMSCACSA) using Inverse Filtering (IF) and a clipping method to achieve better results. The proposed algorithms, LABS-IF and HMSCACSA-IF, achieved better results with two large MFs equal to 12.12 and 12.6678 for lengths 231 and 237, respectively, where the optimal solutions belong to the skew-symmetric sequences. The MF outperformed the state-of-the-art LABS heuristic algorithm, xLastovka, and Golay by up to 24.335% and 2.708%, respectively. These results indicate that the proposed algorithm's simulation produced quality solutions in terms of a fast convergence curve with better optimal means and standard deviation.

Keywords: Merit factor; autocorrelation; skew-symmetric sequences; combinatorial optimization; sine cosine algorithm; cuckoo search algorithm; radar system; wearable antenna; antenna and propagation

1  Introduction

Radar is a remote sensing tool that uses radio waves. Its main objective is to detect a target of interest and provide information about the position, movement, size, and other parameters related to the target. Any inquiry related to target detection is resolved using a standard radar system through a transmitted radio signal and the waveform of the detected target, along with inevitable system noise and unwanted scatter reflections (clutter). If an adequate return signal strength is received, it is further analyzed to determine the target's length, speed, shape, size, and trajectory; this process is known as parameter estimation. The distance of the target is determined by calculating the delay of the returned signal. Similarly, the target velocity can be calculated by analyzing the change in carrier frequency (Doppler shift) while neglecting the higher-order effects of the obtained waveform. The transmitted signal can also be carefully selected and produced to optimize its extraction capability. The radar transmits the fundamental signal, the signal bounces off the object, and a receiver later receives it, much like a sound echo bouncing off a wall. However, radar does not use sound; instead, it uses electromagnetic waves known as radio waves and microwaves. Radar measurement of range or distance is made possible by the properties of radiated electromagnetic energy [1].

A metaheuristic is a process of designing heuristic procedures to identify the ideal solution to complex optimization problems and provide quality results. Previous literature shows that various techniques produce practical components for the algorithm and generate a high volume of information during the iteration process. These capabilities assist designers in obtaining high-performing parameter values and are quickly incorporated into an arrangement strategy to facilitate the metaheuristic algorithm's decision-making process. A metaheuristic-based algorithm is also a general solver template that can be adapted to various optimization problems by appropriately tweaking its operators and configuring its parameters. Optimization algorithms can be categorized into five classes: (1) Evolutionary Algorithms (EA), (2) Swarm-Based Algorithms (SBA), (3) Physics-Based Algorithms (PBA), (4) Population-Based Algorithms (PBa), and (5) Human-Based Algorithms (HBA). At each generation, an EA recombines the current population's preferable characteristics to produce a new population that is selected according to the natural selection principle [2]. Examples of EA range from the Genetic Algorithm (GA) [3], Evolution Strategy (ES) [4], and Genetic Programming (GP) [5] to more recent algorithms such as the Biogeography-Based Optimizer (BBO) [6], Evolutionary Programming (EP) [7], Differential Evolution (DE) [8], and the Earth-Worm Optimization Algorithm (EWA) [9]. Owing to these advantages, GA has become one of the most representative techniques for solving complex optimization problems [10]. However, the standard GA has limitations, such as premature convergence, a tendency to converge towards local optima or even arbitrary points rather than the global optimum of multi-peak function optimization problems, a low convergence speed, and, in some cases, an inability to reach the global optimal convergence state. These shortcomings are primarily due to the restrictions of the static crossover and mutation operators in the standard GA itself. Furthermore, the number of elements exposed to mutation is extensive, with an exponential increase in search space size. The destructive mutation makes the original GA extremely difficult to deploy, particularly on complex engineering problems such as the optimization of high MF for long skew-symmetric LABS. SBA mimic the foraging of animals that live in a group [11–13]; examples range from Particle Swarm Optimization (PSO) [14], the Bat Algorithm (BA) [15], and the Grey Wolf Optimizer (GWO) [11] to recent algorithms such as the Firefly Algorithm (FFA) [16], Ant Lion Optimization (ALO) [12], the Whale Optimization Algorithm (WOA) [13], and Monarch Butterfly Optimization (MBO) [17]. PBA employ rules of physics to perform optimization, such as the Gravitational Search Algorithm (GSA) [18] and Charged System Search (CSS) [19], up to recent algorithms such as the Slime Mould Algorithm (SMA) [20]. Meanwhile, PBa share some features regardless of their nature, such as Stochastic Fractal Search (SFS) [21], SCA [22], and Harris Hawks Optimization (HHO) [23]. On the other hand, HBA mimic the behavior of a group of animals when searching for food; the solutions are typically constructed at each iteration based on historical information gained by previous generations. The algorithms that fall into this classification are Harmony Search (HS) [24] and the Heap-Based Optimizer (HBO) [25].

As suggested by Mirjalili et al. [22], SCA is a recent population-based metaheuristic optimization algorithm that has received significant attention from many researchers for solving optimization problems. Various attempts have been made to use the selection features of classical SCA to increase its exploration and exploitation efficiency. Specifically, SCA has been used as the main algorithm to meet circuit design requirements by reducing the area occupied by circuit transistors [26]. Like most SBA, SCA relies on an iterative strategy to perform a random search in the solution space. SCA cannot guarantee that the optimal solution is reached in one run. However, when the initial value setting and the number of iterations are sufficiently large, the search for the optimal solution is enhanced significantly [10,27]. Other recent evolutionary computation algorithms, such as MBO, EWA, EHO, and MSA [28], have been widely applied to engineering problems, where they exhibit good performance in real engineering applications [29]. In conclusion, every random optimization algorithm has advantages and disadvantages. According to the "No Free Lunch" theorem, no single optimizer can effectively solve every optimization problem. The global and local search abilities of random optimization algorithms play a crucial role in determining their capacity to find optimal solutions [10]. Many studies have been conducted to improve the performance of SCA using modification and hybridization techniques, and a research gap analysis was performed on these state-of-the-art techniques. SCA tends to become trapped in sub-optimal regions. The improvement of SCA with Opposition-Based Learning (OBL) for better exploration of the search space has generated more accurate optimal solutions; however, OBL could not converge to an optimal solution within the maximum number of function evaluations [30]. Sindhu et al. [31] proposed an MSCA using an elitism strategy and upgraded the new solution by selecting the ideal features to improve SCA accuracy. These modifications aimed to develop an optimizer of higher precision and to reduce the number of features using wrapper-based feature selection. Based on the ten benchmark datasets tested in this experiment, MSCA was proven to achieve higher efficiency with fewer features. However, the algorithms proposed in [31,32] were unable to guarantee accurate convergence of the proposed objective function and suffered from premature convergence and inaccuracy for some benchmark test functions. The modified SCA in [33–35], the hybrid SCA-PSO [36], the hybrid modified PSO-SCA with Levy flight [37], and the hybrid PSO-SCA [38] suffered from premature convergence due to poor exploitation. All of the aforementioned algorithms had a redundancy problem in search agent initialization. In a more recent study, researchers proposed a novel modified SCA that improves the algorithm's performance using a linear searching path and an empirical parameter [10]. The work in [39] introduced a modified SCA with Teacher Supervision Learning (TSL-SCA) to increase the convergence speed. The GWO-SCA hybridization in [40] incorporated the position of the grey wolf (alpha) from the GWO algorithm into the position update equation of SCA. This new hybridization approach was developed to explore and exploit the diversity of the algorithm.
The proposed hybridization of SCA in [41] and the single-feature modification of SCA in [42] had the local trap problem due to poor exploration. The improvement of SCA with PSO [43] was better than the conventional SCA and other metaheuristics in terms of precision and computation time. However, this hybridization of SCA with PSO [43] and with DE [44] also exhibited poor exploration in finding global optimal solutions. These algorithms could not prevent the fourth parameter of SCA, r4, from being biased towards exploration or exploitation, because r4 is a switching parameter drawn as a random number between 0 and 1.

Determining optimal binary sequences with high MF is the main problem in LABS for the radar system. A low MF leads the radar system to consume high energy. The challenge of obtaining optimal binary sequences is hard to tackle due to the increasing computational complexity and the difficulty of designing long sequences. In addition, it is hard to achieve the maximum possible MF of 12.3248, which corresponds to an infinite-length LABS [45–47]. Therefore, in this work, a hybrid modified sine cosine algorithm with cuckoo search algorithm using inverse filtering and a clipping method is proposed for the first time to improve SCA by avoiding local traps and obtaining the global optimal solution of a real engineering problem, i.e., LABS. In this context, the global optimal solution is the maximum value of MF for LABS. The main contributions of this study are as follows: (1) LABS-IF: the use of inverse filtering and clipping methods in low autocorrelation binary sequences can improve the search for high-MF optimal binary sequences. (2) HMSCACSA-IF: the proposed algorithm can reduce the redundancy problem in the initialization of the search agent population and uses a hybrid mechanism to balance the algorithm's exploration and exploitation capabilities. As a result, the search for the global optimal LABS with high MF can be achieved significantly.

The paper is organized as follows: The basic methodology used to design the proposed algorithm is outlined in Section 2. Section 3 describes the proposed LABS using HMSCACSA with IF in detail, including its implementation and working principles. Section 4 discusses the performance of the proposed LABS-IF and HMSCACSA-IF methods, which had the highest MF values with low Energy Level (EL). The analysis, simulation, and statistical results are presented in Section 5 to report both methods' efficiency. Section 6 concludes this article and outlines future research.

2  Related Works

2.1 General Fundamentals for Designing Sequences with a Low Autocorrelation Function

Finite-length sequences with an ideal Auto Correlation Function (ACF), consisting of only a single pulse, have many applications in control and communication systems; they can be used in radar pulse compression systems and deep-space problems. The development of solid antenna arrays has two critical effects on radar system design. First, peak power limitations call for long-duration waveforms to obtain the signal energy needed over the desired range. Second, the ability to steer the beam at high speed gives the radar a multifunctional capacity, so that a range of waveforms can be used flexibly. Theoretical studies, which provide the basis for technological advancement, have not solved the general problem of signal design. Therefore, knowledge of the properties of pulse trains, a class of signals that is especially suitable for digital processing, is of increasing practical importance. There are two crucial features of such sequences. One is the Energy Ratio (ER), defined as the ratio of the total sequence energy to the energy of the largest pulse [48]. The other property is the total Sidelobe Energy (SLE), i.e., the energy in the sidelobes of the ACF of the sequence. The aperiodic LABS problem considers a binary sequence A = a1, a2, …, aL of length L, where each ai ∈ {+1, −1}, and the ACF is:

$C_k(A)=\sum_{i=1}^{L-k} a_i a_{i+k}, \quad k=1,\ldots,L-1$ (1)

and the energy function $E(A)$ to be minimized is:

$E(A)=\sum_{k=1}^{L-1} C_k^2(A)$ (2)

or, alternatively, the MF to be maximized is [46]:

$MF=\dfrac{L^2}{2E(A)}$ (3)
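For concreteness, the following minimal Python sketch (not part of the original paper) evaluates Eqs. (1)–(3) for a given ±1 sequence; the length-13 Barker code is used only as a quick sanity check.

import numpy as np

def aperiodic_acf(a):
    # Aperiodic autocorrelation C_k(A) of Eq. (1) for k = 1, ..., L-1.
    L = len(a)
    return np.array([np.dot(a[:L - k], a[k:]) for k in range(1, L)])

def sidelobe_energy(a):
    # Sidelobe energy E(A) of Eq. (2).
    return float(np.sum(aperiodic_acf(a) ** 2))

def merit_factor(a):
    # Merit factor MF = L^2 / (2 E(A)) of Eq. (3).
    return len(a) ** 2 / (2.0 * sidelobe_energy(a))

# The length-13 Barker code reaches the well-known merit factor of about 14.08.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
print(merit_factor(barker13))   # -> 14.083...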

2.2 Clipping Method

The energy level defined by Eq. (2) is a figure of merit for the energy distribution of the sequence. Binary sequences with all element amplitudes equal to unity, as in strictly phase-modulated pulse trains, achieve the maximum energy value; in this case the energy level equals n (the sequence length). The energy level depends on two independent variables: the magnitude of the largest pulse, max |si|, and the total energy E of the pulse train. For a multilevel sequence of a given length, the energy level may be increased by lowering the energy of the largest pulse max |si|. This is done through the 'clipping' process, in which the highest pulses of the sequence are clipped to a predetermined level. The clipping level is defined as follows:

$\mathrm{CLIP}=\dfrac{\max(\text{Pulse Height})-\min(\text{Pulse Height})}{100}\cdot x$ (4)

where x is the percentage that defines the amount of clipping applied at each step. After clipping, the cut-off sequence is no longer a true multilevel sequence, and its ACF is no longer a single pulse. The details of the clipping method are provided in [49].
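The following Python sketch illustrates one possible reading of the clipping step; interpreting "pulse height" as the element magnitude |si| and the clipping level as x percent of the pulse-height range of Eq. (4) are assumptions of this sketch, and the full procedure is the one described in [49].

import numpy as np

def clip_sequence(s, x):
    # Clip the largest pulses of a multilevel sequence s to the level CLIP of
    # Eq. (4), taken here as x percent of the pulse-height range. Interpreting
    # "pulse height" as the magnitude |s_i| is an assumption of this sketch.
    s = np.asarray(s, dtype=float)
    h = np.abs(s)
    clip_level = (h.max() - h.min()) * x / 100.0
    return np.clip(s, -clip_level, clip_level)

# Repeatedly clipping and re-quantizing drives a multilevel sequence
# towards a two-level (binary) one.
rng = np.random.default_rng(0)
s = rng.normal(size=16)
print(clip_sequence(s, 90.0))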

2.3 Inverse Filtering Method

The optimum inverse (or least error energy) filter problem is formulated with reference to Fig. 1. The error sequence {ei} is the difference between the desired filter output sequence {di} and the actual filter output sequence {ci}, where {ai} is the input sequence of length (n+1). In the present case, the desired filter output sequence consists of a unit "spike" at some time index q. The optimization problem is to select the filter sequence {fi} of length (m+1) that minimizes the energy of the error sequence {ei}. The details of the IF method are provided in [50].


Figure 1: The optimum inverse filter problem
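A minimal least-squares sketch of this spiking (inverse) filter is given below, assuming the desired output {di} is a unit spike at index q; the complete IF procedure follows [50].

import numpy as np

def spiking_inverse_filter(a, m, q):
    # Least-error-energy inverse filter (cf. Fig. 1): choose f of length m+1
    # so that the convolution a*f approximates a unit spike at time index q.
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    out_len = n + m + 1                       # length of the convolution a*f
    A = np.zeros((out_len, m + 1))            # convolution matrix: A @ f == np.convolve(a, f)
    for j in range(m + 1):
        A[j:j + n + 1, j] = a
    d = np.zeros(out_len)                     # desired output {d_i}: unit spike at q
    d[q] = 1.0
    f, *_ = np.linalg.lstsq(A, d, rcond=None) # least-squares filter coefficients
    c = A @ f                                 # actual output {c_i}
    e = d - c                                 # error sequence {e_i}
    return f, c, float(e @ e)

f, c, err = spiking_inverse_filter([1.0, 0.5, -0.25], m=8, q=0)
print(err)   # the residual error energy shrinks as the filter length m grows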

2.4 Review of the State-of-the-Art Work of LABS Techniques

The LABS problem is a complex combinatorial problem, and its solutions are crucial for many practical applications. It is vital to have sequences with low autocorrelation sidelobe energy in radar applications to minimize the noise and improve the radar's ability to track multiple targets. The challenge of determining the maximum MF for binary sequences is an old and complicated combinatorial optimization problem [51]. It also appears in communication engineering, in statistical mechanics as the Bernasconi model [52], and in mathematics as the study of Littlewood polynomials [53].

The research gap analysis of LABS-related work covered recent SCIE journals and selected proceedings (2015–2020). The algorithm proposed by Mow et al. [54] had low MF values, less than 5. The pairwise interacting spins method with length L, introduced by [52], was only for short binary sequence lengths, ranging from 2 to 82, with low MFs from 2 to 8.918. The theoretical minimum energy analysis proposed by [55] obtained a low MF of 10.23 for lengths from 61 to 304. The combined branch-and-bound algorithm found all sequences with the highest MF only for small L values, less than 119, with an MF of 9 [56]. Not all the best solutions under this limit were optimal for each length. Bošković et al. [45] improved MFs using three solvers (lssMAts, lssRRts, and lssOrel). This algorithm had the best opportunity to find practical solutions whose MF was slightly closer to the conjectured asymptotic value of 12.3248. The best MF increased from 7.1287 to 8.0918, representing a decrease of 560 in energy level, from 4705 to 4145. However, not all the best solutions under this limit were optimal for each length. Farnane et al. [47] obtained small binary sequence lengths (2 to 10) with MF values ranging from 3 to 4, and only length 3 had a higher MF value (8.17) than the other lengths. The LABS algorithms proposed in [46,57,58] were unable to achieve the maximum possible MF of 12.3248 at infinite length, with low MFs of 9, 3.52, and a range of 3 to 5.984, respectively. All the studies mentioned earlier had lower MFs with shorter sequence lengths and were unable to achieve the maximum possible MF of 12.3248 at infinite length. Therefore, this research work proposes HMSCACSA-IF to determine new best-known skew-symmetric sequence solutions with higher MF than the state-of-the-art algorithms over sequence lengths beyond 225.

3  Proposed LABS Using HMSCACSA-IF

This section presents a systematic methodology in three phases: the first phase gives a brief system description of the HMSCACSA; the second phase describes the design procedure of the proposed LABS-IF and HMSCACSA-IF for radar waveforms; and the third phase demonstrates the validation procedure of the second phase using parametric studies with three critical internal parameters, i.e., population, dimension, and range.

3.1 System Description of HMSCACSA

The variants are coded in MATLAB 2018a on a Core i5, 3.1 GHz system, and all experiments are simulated on the same Personal Computer (PC). Before presenting the construction method, the basic procedure of how the program runs is concisely exposed; the source code is provided in the supplementary file. This version of the program computes the HMSCACSA-IF for LABS. The HMSCACSA involves two algorithmic techniques based on SCA, namely modification and hybridization. SCA begins with a random distribution of a set of candidate solutions. The new sample set generated by the Latin Hypercube Sampling (LHS) method exhibits higher stability and wider coverage in the SCA adjustment. The details of the SCA algorithm and the LHS method expressions are provided in [27].
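As a rough illustration of these two building blocks, the following Python sketch shows an LHS-style population initialization and the classical SCA position update of [22]; the function names and parameter values are illustrative only, and the exact expressions used in this work follow [27].

import numpy as np

def lhs_population(n_agents, dim, lb, ub, rng):
    # Latin hypercube sampling: every agent falls in a distinct stratum of each
    # dimension, giving a more even initial spread than plain uniform sampling.
    strata = np.column_stack([rng.permutation(n_agents) for _ in range(dim)])
    samples = (strata + rng.random((n_agents, dim))) / n_agents   # values in [0, 1)
    return lb + samples * (ub - lb)

def sca_update(X, best, t, t_max, rng, a=2.0):
    # One iteration of the classical SCA position update (Mirjalili [22]).
    r1 = a - t * a / t_max                    # decreases linearly towards zero
    r2 = 2.0 * np.pi * rng.random(X.shape)
    r3 = 2.0 * rng.random(X.shape)
    r4 = rng.random(X.shape)                  # switches between the sine and cosine branches
    step = np.abs(r3 * best - X)
    return np.where(r4 < 0.5,
                    X + r1 * np.sin(r2) * step,
                    X + r1 * np.cos(r2) * step)

rng = np.random.default_rng(1)
X = lhs_population(50, 20, -1.4, 1.4, rng)    # e.g., population 50, dimension 20, range 1.4
best = X[0].copy()                            # placeholder for the best agent found so far
X = sca_update(X, best, t=1, t_max=100, rng=rng)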

3.2 Proposed LABS-IF and HMSCACSA-IF

In this section, information related to the proposed LABS (sub-Section 2.1) problem using the IF (sub-Section 2.3) and clipping (sub-Section 2.2) methods, its implementation, and its working principles is explained. This LABS-IF is then optimized using the HMSCACSA method, so that the proposed algorithm combines the advantages of both LABS and HMSCACSA. A solution A can be represented as a pair consisting of a binary string of length L, with ai ∈ {+1, −1} encoded by 1 and 0, and its energy value E(A). The neighborhood of A with length L is obtained by flipping exactly one symbol i in the sequence. Such a neighborhood is needed to conduct a local search and to decide when to stop the walk. For example, in each step the gradient walk (steepest descent) moves to the lowest-energy neighbor. First, the proposed algorithm, called the LABS optimizer using the IF method, is presented; this method is known to produce a good ACF. Sets of sequences are then tested using the clipping method for the best-known skew-symmetric sequences, and the EL and MF calculations are applied. High MF values are then obtained, and the corresponding coordinates of the skew-symmetric sequences are stored. The algorithm begins with an initial randomly generated solution, stores a large number of promising solutions, and systematically conducts local searches on them. Second, to clarify LABS-IF, this method uses HMSCACSA to store the best coordinate pairs using the pseudocode presented in Algorithm 1. The LABS search generates a new set of skew-symmetric solutions, which is returned as the fitness function of HMSCACSA. The returned fitness functions are updated with the best coordinates of the skew-symmetric sequences with higher MF and lower EL.
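The one-flip neighborhood and gradient walk mentioned above can be sketched as follows (a simplified stand-in, not the pseudocode of Algorithm 1); the energy function is Eq. (2).

import numpy as np

def energy(a):
    # Sidelobe energy E(A) of Eq. (2).
    L = len(a)
    c = np.array([np.dot(a[:L - k], a[k:]) for k in range(1, L)])
    return float(np.sum(c ** 2))

def steepest_descent_flip(a):
    # Gradient walk described in the text: in each step, move to the one-flip
    # neighbour with the lowest energy; stop when no single flip improves E(A).
    a = np.array(a, dtype=int)
    best_e = energy(a)
    while True:
        neighbour_e = []
        for i in range(len(a)):
            b = a.copy()
            b[i] = -b[i]                      # flip exactly one symbol
            neighbour_e.append(energy(b))
        i_best = int(np.argmin(neighbour_e))
        if neighbour_e[i_best] >= best_e:
            return a, best_e                  # local minimum reached
        a[i_best] = -a[i_best]
        best_e = neighbour_e[i_best]

rng = np.random.default_rng(2)
seq = rng.choice([-1, 1], size=51)            # random +/-1 starting sequence
local_opt, e = steepest_descent_flip(seq)
print(e, len(local_opt) ** 2 / (2.0 * e))     # local-minimum energy and its merit factor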

3.3 Validation of LABS-IF

The validation of the LABS problem with odd sequence lengths, in terms of statistical output analysis, is carried out by measuring the consistency of the sequences. The problem of finding such a sequence is equivalent to a problem with interaction and periodic boundary conditions. MFs for binary sequences were generated by the proposed method (using the IF method) for sequence lengths from 190 to 225 [46]. Boxplots of the high MF values obtained with the IF method are produced for lengths 51 to 237; the summary range of high MF displays the minimum value, Quartile 1, the median, Quartile 3, and the maximum value. These results highlight the lengths that have a lower median, a closer one-quartile bias range, a larger interquartile range, and a lower mean than the other lengths. The best-known MFs and solutions obtained from the suggested method (using the IF method) are solved by symmetries that partition solutions into four quadrants with coordinate prefixes [1 1], [−1 1], [1 −1], and [−1 −1]. The best new MF is returned for canonic solution coordinates representing the first (L+1)/2 binary symbols in run-length notation. For example, the solution coordinates for L = 225 were listed as 8, 12, 2, 1, 4, where the value of 8 implies a run of eight 0's, followed by a run of twelve 1's. Note that these solutions were in canonic form: each solution begins with a run of at least two 0's. The results were compared among five sets of high-MF data (using the IF method) at L = 231, to show that the maximum number of IF iterations (15) yields the maximum MF values. The energy values of the five high-MF data sets (using the IF method) were collected up to L = 227 to demonstrate that low energy corresponds to a high MF value. The proposed method was validated to confirm accurate convergence behavior, i.e., faster runs and stability of the optimization problem, rather than early convergence curves trapped in a local maximum. The convergence curves of different algorithms on the high-MF function for five variation values of the critical internal parameters (population, dimension, and range value) are presented. The proposed algorithm was benchmarked against state-of-the-art algorithms such as xLastovka [46], 1bCAN [57], and Dimitrov et al. [58].
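As a side note, the skew-symmetric structure exploited here means that only the first (L+1)/2 symbols of a candidate need to be stored; the sketch below expands such a prefix into the full odd-length sequence using the standard skew-symmetry relation (the run-length notation of the stored coordinates is not reproduced here).

import numpy as np

def expand_skew_symmetric(first_half):
    # Expand the first m = (L+1)/2 symbols of an odd-length skew-symmetric
    # sequence (L = 2m - 1) using b_{m+l} = (-1)^l * b_{m-l} for l = 1..m-1.
    b = np.asarray(first_half, dtype=int)
    m = len(b)
    tail = [(-1) ** l * b[m - l - 1] for l in range(1, m)]   # 0-based indexing of b
    return np.concatenate([b, np.array(tail, dtype=int)])

half = np.array([1, 1, -1, 1, 1, -1, 1])      # m = 7 stored symbols
full = expand_skew_symmetric(half)
print(len(full), full)                        # L = 13 and the reconstructed sequence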

3.4 Development of the Execution of the Proposed Algorithm

In this section, the experimental procedure is presented. First, the proposed LABS-IF was further optimized by integrating it with the HMSCACSA algorithm on small and middle-size skew-symmetric sequences, and the proposed algorithms were compared to the state-of-the-art algorithms. Second, HMSCACSA-IF was tested on larger skew-symmetric sequences to find new best-known solutions. All algorithms were run for different instance sizes, with 93 independent executions performed for each algorithm and instance size. The termination criterion for each execution was either finding the optimal solution or reaching a length limit, set within the range of lengths 51 to 237. Notably, SCA is sensitive to its parameter values, which can be fine-tuned to improve its capability of exploring optimal solutions in the global search domain. The performance is illustrated as a curve through an optimum focus point, and the details of the design specification are tabulated in Tab. 1. All comparative results were simulated using the optimized test function with three main parameters (population, dimension, and range value of r). Five variation values were analyzed to identify the critical parameters affecting the proposed algorithm's performance, based on [27]; when a specific parameter is varied, the others are held constant (a sketch of this one-at-a-time sweep is given below).
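The sketch below outlines this one-at-a-time parameter sweep; the variation values and the run_optimizer stub are hypothetical placeholders, with only the default settings chosen to match the best-performing values reported later in Section 5, while the real settings are those in Tab. 1.

import numpy as np

# Hypothetical defaults and five-point variations, for illustration only;
# the exact settings used in the paper are those tabulated in Tab. 1.
DEFAULTS = {"population": 50, "dimension": 20, "range": 1.4}
VARIATIONS = {
    "population": [10, 30, 50, 70, 90],
    "dimension": [5, 10, 20, 30, 40],
    "range": [0.6, 1.0, 1.4, 1.8, 2.2],
}

def run_optimizer(population, dimension, range_value, seed):
    # Placeholder for one HMSCACSA-IF run; it should return the best MF found.
    rng = np.random.default_rng(seed)
    return float(rng.uniform(9.0, 12.7))      # stand-in result, not real data

def one_at_a_time_sweep(n_runs=15):
    # Vary one parameter over its five values while the other two stay at
    # their defaults, averaging the best MF over n_runs independent executions.
    results = {}
    for name, values in VARIATIONS.items():
        for v in values:
            params = dict(DEFAULTS, **{name: v})
            scores = [run_optimizer(params["population"], params["dimension"],
                                    params["range"], seed) for seed in range(n_runs)]
            results[(name, v)] = float(np.mean(scores))
    return results

print(one_at_a_time_sweep())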


4  Performance Evaluation of LABS-IF and HMSCACSA-IF

In this experiment, the LABS-IF and HMSCACSA-IF algorithms were run 15 times for skew-symmetric sequences that had optimal solutions. Each run stopped when a target MF value was reached. In this section, experiments were conducted on sequences with long lengths, from L = 151 up to L = 237. For a fair comparison with other results from the literature [46] and [57,58], the lengths were divided into two intervals: 151 ≤ L ≤ 225 and 227 ≤ L ≤ 237.

4.1 Results on Large Skew Symmetric Sequences

The best-known solutions found by the proposed algorithms were determined for L = 191 up to L = 239, as summarized in Tab. 2, where E, F, and the best coordinates are listed, respectively. The proposed LABS-IF and HMSCACSA-IF algorithms achieved better solutions with F > 11 for L = 191 up to L = 237 compared to the xLastovka algorithm [46]. LABS-IF obtained F ≥ 12 only when L = 231, whereas HMSCACSA-IF achieved F ≥ 12 for larger sequences, L = 235, 237, and 239, in the second length interval. The highest MF value of LABS-IF was F = 12.12 for L = 231, and that of HMSCACSA-IF was F = 12.67 for L = 237, both skew-symmetric. Thus, the results show that HMSCACSA-IF outperformed the LABS-IF algorithm in this length range. The LABS-IF target was equal to the currently best-known (optimal for L = 231) energy value of skew-symmetric sequences, with F = 12.12; for HMSCACSA-IF, the currently best-known (optimal for L = 237) energy value of skew-symmetric sequences corresponded to F = 12.67. Tab. 3 shows that the LABS-IF algorithm produced different MF values for the same length, L = 231, across five data sets, whereas the fitness MF results of the proposed HMSCACSA-IF algorithm remained consistent. This inconsistency indicates that LABS-IF could not obtain the same F and E values for the same sequence length; thus, the optimized MFs of LABS-IF were further enhanced using HMSCACSA. It can be seen that F is commensurate with E, growing as the length increases. Thus, it can be concluded that the random maximum value used as a starting sequence was crucial to determine the best sequence of random variables, the outcome following a deterministic pattern and the probability distributions; these constructs are extremely useful in probability theory. A random sequence constructed with a given property, such as a specific integer or length value, allows the proposed algorithms to obtain the desired sequence to serve as a starting sequence. In addition, a further MF comparison of the LABS-IF and HMSCACSA-IF algorithms with the recent solvers xLastovka [46], 1bCAN [57], and Dimitrov et al. [58] was performed for 191 ≤ L ≤ 225, as shown in Fig. 2. The LABS-IF algorithm was found to have 24 new best-known solutions. Among these, there were eight new best-known skew-symmetric solutions with F ≥ 10, whereas the HMSCACSA-IF algorithm had only two new best-known skew-symmetric solutions. On the contrary, the LABS-IF algorithm had only 13 new best-known skew-symmetric solutions with F ≥ 11, compared to 18 for the proposed HMSCACSA-IF algorithm. In this comparison, the HMSCACSA-IF algorithm had the best-known maximum value of F = 12.14.


Figure 2: MFs of binary sequences obtained by the LABS-IF, HMSCACSA-IF, xLastovka [46], 1bCAN [57] and Dimitrov [58] algorithms

5  Validation of Parameter Analysis of HMSCACSA-IF Results

The design of HMSCACSA-IF, which amalgamates HMSCACSA with the IF method, was aimed at optimizing the LABS more effectively. The three hypotheses postulated for HMSCACSA, as described in [50], were successfully achieved: the fastest convergence rate, and the best mean and standard deviation values for the chosen single-objective minimization or maximization. This was verified in the third phase, the parameter sensitivity validation analysis of HMSCACSA-IF. Figs. 3–5 show the comparison of five variation values of three different optimizer parameters, namely the number of populations, the number of dimensions, and the range value, with respect to the number of iterations, using Tab. 1. The results show that HMSCACSA-IF achieved the fastest convergence when the population was 50, the dimension was 20, and the range was 1.4, respectively.

Fig. 3 shows that the optimizer with a population of 50 achieved the fastest convergence after 12 iterations, with a fitness value of 12.668. The optimizer with a dimension of 20 had the fastest convergence after just four iterations, as shown in Fig. 4. Furthermore, Fig. 5 shows that the optimizer with a diversity range of 1.4 achieved the fastest convergence after five iterations. Tab. 4 lists the simulation iteration percentage differences for five variation values of the three HMSCACSA-IF internal parameters. As illustrated in Figs. 6a and 6b, the box plots reveal several salient characteristics of the search processes for lengths 151 to 237. The LABS-IF algorithm was deployed to analyze the overall performance of the proposed HMSCACSA-IF, which had higher medians than the LABS-IF algorithm. Even though length 167 had a much wider box, i.e., a far quartile bias range, it also had a smaller interquartile range, a higher mean, and a higher median than other lengths. Note that the median for length 237 was about 12.66779, higher than that of length 157, whose median was 9.273514. Moreover, length 237 also had the widest box plot, the most extensive interquartile range, and the highest mean compared to other lengths. The statistical analysis also confirms that HMSCACSA-IF had the fastest convergence with the maximum MF value compared to the state-of-the-art algorithms [46,57,58].


Figure 3: The convergence curve of different number of populations using HMSCACSA-IF for high MF in L =237


Figure 4: The convergence curve of different number of dimensions using HMSCACSA-IF for high MF in L =237


Figure 5: Diversity plot with variants range of r exploration and exploitation using HMSCACSA-IF for high MF in L =237


The proposed HMSCACSA-IF has proven to possess better exploration and exploitation capabilities for global searching. The search for binary sequences with high MF represents a great computational challenge. To reduce the computational constraints, the solvers accept only odd values of the sequence length L for the best skew-symmetric binary solutions. It was found that not all the best solutions are optimal for every L. The limitation of LABS-IF and HMSCACSA-IF is that they tend to become trapped at shorter lengths, 51 ≤ L ≤ 61, where the average MF value is below five. In this study, L was set to the lowest ten registered length values using the same approach as in [56]. However, to determine the optimal F for either general or skew-symmetric sequences, HMSCACSA-IF was benchmarked against the theoretical approximation of 12.3248 as L→∞ [59]. For some instances, the best MF improvement was significant, i.e., from 12.3248 to 12.6678 for L = 237, an increment of about 2.71% over the searched lengths 51 ≤ L ≤ 237 of the HMSCACSA-IF algorithm. Hence, the subsequent search for odd sequences will be continued with larger lengths to obtain a further increment with higher MF than the HMSCACSA-IF algorithm. The significant limitations of LABS are addressed through the IF and clipping methods to find new best-known skew-symmetric solutions with their MF and EL values. The results obtained are significantly improved compared to the recent state-of-the-art algorithms, where the MF and EL of the proposed algorithms are better than those of xLastovka, 1bCAN, and Shotgun Hill-Climbing, whose largest binary sequence lengths are up to 267, 1000, and 300, respectively. The proposed methods are uniquely postulated to improve the search for local optimal solutions systematically.


Figure 6: Boxplots showing average of high MF (best fitness of HMSCACSA-IF) on the range (a) 151 ≤ L ≤ 195 and (b) 197 ≤ L ≤ 237

6  Conclusion

This study focused on designing a newly modified HMSCACSA featuring IF to optimize the MF with low EL of LABS for odd-length skew-symmetric binary sequences. The new analytical method for generating binary sequences from multilevel sequences repeats the inversion process several times, using a random maximum value as the starting sequence. Precisely, the input sequence was optimized to obtain superior performance with high MF in LABS, with the LABS problem serving as the test case for the new stochastic algorithm. For longer lengths, the LABS-IF and HMSCACSA-IF algorithms are highly efficient: six new best solutions with MF values F > 10 were found, for L = 197 up to L = 237 with the LABS-IF algorithm and for 173 ≤ L ≤ 237 with the HMSCACSA-IF algorithm, respectively. Both proposed algorithms generate higher MF with lower EL for large skew-symmetric binary sequences than the state-of-the-art heuristic algorithms. In this case, the skew-symmetric sequences have an odd length, so they can serve as starting points when sequences are searched without the skew-symmetric constraint or for the nearest odd-length sequences. Besides the method used in this paper, some of the most representative computational intelligence algorithms, such as MBO, EWA, EHO, and MSA, could also solve the problem of obtaining high MF with low PSL for LABS. Finally, the proposed HMSCACSA-IF has a high potential to be applied to the optimization of wireless communication systems, such as antenna array synthesis [60], microwave rectifiers [61], and antenna structure optimization [62].

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

1. K. Yao, F. Lorenzelli and C. E. Chen, "Introduction and motivation to detection and estimation," in Detection and Estimation for Communication and Radar Systems, 1st ed. New York, USA: Cambridge University Press, pp. 17–20, 2013.
2. S. Das, S. S. Mullick and P. N. Suganthan, "Recent advances in differential evolution-an updated survey," Swarm and Evolutionary Computation, vol. 27, pp. 1–30, 2016.
3. D. N. Mudaliar and N. K. Modi, "Applying m-mutation operator in genetic algorithm to solve permutation problems," in Proceeding IEEE International Conference on Systems, Computation, Automation and Networking (ICSCAN), Pondicherry, India, pp. 1–5, 2019.
4. M. G. Nejad, A. H. Kashan and S. M. Shavarani, "A novel competitive hybrid approach based on grouping evolution strategy algorithm for solving U-shaped assembly line balancing problems," Production Engineering Research and Development, vol. 12, pp. 555–566, 2018.
5. S. Bianco, G. Ciocca and R. Schettini, "Combination of video change detection algorithms by genetic programming," IEEE Transactions on Evolutionary Computation, vol. 21, no. 6, pp. 914–928, 2017.
6. I. Aljarah, H. Faris, S. Mirjalili and N. Al-Madi, "Training radial basis function networks using biogeography-based optimizer," Neural Computing and Applications, vol. 29, pp. 529–553, 2018.
7. Y. Gangadhar, V. S. G. Akula and P. C. Reddy, "An evolutionary programming approach for securing medical images using watermarking scheme in invariant discrete wavelet transformation," Biomedical Signal Processing and Control, vol. 43, pp. 31–40, 2018.
8. Y. L. Li, Z. H. Zhan, Y. J. Gong, W. N. Chen, J. Zhang et al., "Differential evolution with an evolution path: A DEEP evolutionary algorithm," IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1798–1810, 2015.
9. A. Khan, N. Mushtaq, S. H. Faraz, O. A. Khan, M. A. Sarwar et al., "Genetic algorithm and earthworm optimization algorithm for energy management in smart grid," in F. Xhafa, S. Caballé and L. Barolli (Eds.), Advances on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC 2017), Lecture Notes on Data Engineering and Communications Technologies, vol. 13, Cham, Switzerland: Springer, pp. 447–459, 2018.
10. M. Wang and G. Lu, "A modified sine cosine algorithm for solving optimization problems," IEEE Access, vol. 9, pp. 27434–27450, 2021.
11. W. Long, J. Jiao, X. Liang, S. Cai and M. Xu, "A random opposition-based learning grey wolf optimizer," IEEE Access, vol. 7, pp. 113810–113825, 2019.
12. A. S. Assiri, A. G. Hussien and M. Amin, "Ant lion optimization: Variants, hybrids, and applications," IEEE Access, vol. 8, pp. 77746–77764, 2020.
13. G. Kaur and S. Arora, "Chaotic whale optimization algorithm," Journal of Computational Design & Engineering, vol. 5, no. 3, pp. 275–284, 2018.
14. M. Sharif, J. Amin, M. Raza, M. Yasmin and S. Chandra Satapathy, "An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor," Pattern Recognition Letters, vol. 129, pp. 150–157, 2020.
15. X. Cai, H. Wang, Z. Cui, J. Cai, Y. Xue et al., "Bat algorithm with triangle-flipping strategy for numerical optimization," International Journal of Machine Learning & Cybernetics, vol. 9, pp. 199–215, 2018.
16. K. Jagatheesan, B. Anand, S. Samantha, N. Dey, A. Ashour et al., "Design of a proportional-integral-derivative controller for an automatic generation control of multi-area power thermal systems using firefly algorithm," IEEE/CAA Journal of Automatica Sinica, vol. 6, no. 2, pp. 503–515, 2019.
17. L. Sun, S. Chen, J. Xu and Y. Tian, "Improved monarch butterfly optimization algorithm based on opposition-based learning and random local perturbation," Complexity, vol. 2019, no. 4182148, pp. 1–20, 2019.
18. Y. Wang, S. Gao, Y. Yu, Z. Wang, J. Cheng et al., "A gravitational search algorithm with chaotic neural oscillators," IEEE Access, vol. 8, pp. 25938–25948, 2020.
19. P. Zakian and A. Kaveh, "Economic dispatch of power systems using an adaptive charged system search algorithm," Applied Soft Computing, vol. 73, pp. 607–622, 2018.
20. S. Li, H. Chen, M. Wang, A. A. Heidari and S. Mirjalili, "Slime mould algorithm: A new method for stochastic optimization," Future Generation Computer Systems, vol. 111, pp. 300–323, 2020.
21. S. Aras, E. Gedikli and H. T. Kahraman, "A novel stochastic fractal search algorithm with fitness-distance balance for global numerical optimization," Swarm & Evolutionary Computation, vol. 61, no. 100821, 2021.
22. S. Mirjalili, "SCA: A sine cosine algorithm for solving optimization problems," Knowledge-Based Systems, vol. 96, pp. 120–133, 2016.
23. H. Chen, A. A. Heidari, H. Chen, M. Wang, Z. Pan et al., "Multi-population differential evolution-assisted Harris hawks optimization: Framework and case studies," Future Generation Computer Systems, vol. 111, pp. 175–198, 2020.
24. Q. Zhu, X. Tang, Y. Li Michael and O. Yeboah, "An improved differential-based harmony search algorithm with linear dynamic domain," Knowledge-Based Systems, vol. 187, no. 104809, pp. 1–14, 2020.
25. Q. Askari, M. Saeed and I. Younas, "Heap-based optimizer inspired by corporate rank hierarchy for global optimization," Expert Systems with Applications, vol. 161, no. 113702, pp. 1–27, 2020.
26. M. A. M. Majeed and P. S. Rao, "Optimization of CMOS analog circuits using sine cosine algorithm," in 8th International Conference on Computing, Communication & Networking Technologies (ICCCNT), Delhi, India, pp. 1–6, 2017.
27. S. J. Rosli, H. Rahim, K. Najmy, R. Ngadiran, R. Ahmed et al., "Hybrid modified method of sine cosine algorithm using Latin hypercube sampling with cuckoo search algorithm for optimization problems," Electronics, vol. 9, no. 11: 1786, pp. 1–25, 2020.
28. Y. H. Feng and G. G. Wang, "Binary moth search algorithm for discounted {0–1} knapsack problem," IEEE Access, vol. 6, pp. 10708–10719, 2018.
29. G. G. Wang, A. H. Gandomi, A. H. Alavi and D. Gong, "A comprehensive review of krill herd algorithm: Variants, hybrids and applications," Artificial Intelligence Review, vol. 51, no. 1, pp. 119–148, 2019.
30. M. A. Elaziz, D. Oliva and S. Xiong, "An improved opposition-based sine cosine algorithm for global optimization," Expert Systems with Applications, vol. 90, pp. 484–500, 2017.
31. R. Sindhu, R. Ngadiran, M. Y. Yasmin, N. Adilah and M. Hariharan, "Sine-cosine algorithm for feature selection with elitism strategy and new updating mechanism," Neural Computing and Applications, vol. 28, pp. 2947–2958, 2017.
32. M. Meshkat and M. Parhizgar, "A novel sine and cosine algorithm for global optimization," in 7th International Conference on Computer & Knowledge Engineering (ICCKE), Mashhad, Iran, pp. 60–65, 2017.
33. W. Long, T. Wu, X. Liang and S. Xu, "Solving high-dimensional global optimization problems using an improved sine cosine algorithm," Expert Systems & Applications, vol. 123, pp. 108–126, 2019.
34. S. Gupta, K. Deep, S. Mirjalili and J. H. Kim, "A modified sine cosine algorithm with novel transition parameter and mutation operator for global optimization," Expert Systems & Applications, vol. 154, no. 113395, pp. 1–21, 2020.
35. Z. K. Feng, S. Liu, W. J. Niu, B. J. Li, W. C. Wang et al., "A modified sine cosine algorithm for accurate global optimization of numerical functions and multiple hydropower reservoirs operation," Knowledge-Based Systems, vol. 208, no. 106461, pp. 1–21, 2020.
36. H. Nenavath, D. R. Kumar Jatoth and D. S. Das, "A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking," Swarm & Evolutionary Computation, vol. 43, pp. 1–30, 2018.
37. S. N. Chegini, A. Bagheri and F. Najafi, "PSOSCALF: A new hybrid PSO based on sine cosine algorithm and Levy flight for solving optimization problems," Applied Soft Computing, vol. 73, pp. 697–726, 2018.
38. K. Chen, F. Zhou, L. Yin, S. Wang, Y. Wang et al., "A hybrid particle swarm optimizer with sine cosine acceleration coefficients," Information Sciences, vol. 422, pp. 218–241, 2018.
39. H. Xian, C. Yang, H. Wang and X. Yang, "A modified sine cosine algorithm with teacher supervision learning for global optimization," IEEE Access, vol. 9, pp. 17744–17766, 2021.
40. N. Singh and S. B. Singh, "A novel hybrid GWO-SCA approach for optimization problems," Engineering Science & Technology, an International Journal, vol. 20, no. 6, pp. 1586–1601, 2017.
41. R. M. Rizk-Allah, "Hybridizing sine cosine algorithm with multi-orthogonal search strategy for engineering design problems," Journal of Computational Design & Engineering, vol. 5, no. 2, pp. 249–273, 2018.
42. M. Belazzoug, M. Touahria, F. Nouioua and M. Brahimi, "An improved sine cosine algorithm to select features for text categorization," Journal of King Saud University - Computer & Information Sciences, vol. 32, no. 4, pp. 454–464, 2020.
43. M. Issa, A. E. Hassanien, D. Oliva, A. Helmi, I. Ziedan et al., "ASCA-PSO: Adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment," Expert Systems with Applications, vol. 99, pp. 56–70, 2018.
44. H. Nenavath and R. K. Jatoth, "Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking," Applied Soft Computing, vol. 62, pp. 1019–1043, 2018.
45. B. Bošković, F. Brglez and J. Brest, "Low-autocorrelation binary sequences: On improved merit factors and runtime predictions to achieve them," Applied Soft Computing, vol. 56, pp. 262–285, 2017.
46. J. Brest and B. Bošković, "A heuristic algorithm for a low autocorrelation binary sequence problem with odd length and high merit factor," IEEE Access, vol. 6, pp. 4127–4134, 2018.
47. K. Farnane, K. Minaoui and D. Aboutajdine, "Local search algorithm for low autocorrelation binary sequences," in 7th International Conference on Optimization & Applications (ICOA), Mohammedia, Morocco, pp. 1–5, 2018.
48. F. Ghani, S. J. B. Rosli, A. H. Alif and R. B. Ahmed, "Waveform design for improved range and Doppler resolution in radar," in 3rd International Conference on Intelligent Systems Modelling & Simulation (ISMS), Kota Kinabalu, Malaysia, pp. 247–251, 2012.
49. S. J. Rosli, F. Ghani, A. Hasmani, A. Ghani and R. B. Ahmed, "An iterative technique for the design of amplitude and phase modulated pulse trains for radar," in Proceeding International Conference on Control, Communication and Power Engineering, pp. 124–131, 2013.
50. S. J. B. Rosli, F. Ghani, A. H. A. Ghani and B. R. Ahmad, "Synthesis of finite length multi level sequences for clutter rejection in radar," in Fourth International Conference on Computational Intelligence, Communication Systems and Networks, Phuket, Thailand, pp. 273–277, 2012.
51. J. M. Velazquez-Gutierrez and C. Vargas-Rosales, "Sequence sets in wireless communication systems: A survey," IEEE Communications Surveys & Tutorials, vol. 19, no. 2, pp. 1225–1248, 2017.
52. A. N. Leukhin and E. N. Potekhin, "A Bernasconi model for constructing ground-state spin systems and optimal binary sequences," Journal of Physics: Conference Series, vol. 613, pp. 1–7, 2015.
53. C. Günther and K.-U. Schmidt, "Merit factors of polynomials derived from difference sets," Journal of Combinatorial Theory, Series A, vol. 145, pp. 340–363, 2017.
54. W. H. Mow, K. Du and W. H. Wu, "New evolutionary search for long low autocorrelation binary sequences," IEEE Transactions on Aerospace & Electronic Systems, vol. 51, no. 1, pp. 290–303, 2015.
55. A. Ukil, "On asymptotic merit factor of low autocorrelation binary sequences," in 41st Annual Conference of the IEEE Industrial Electronics Society (IECON), Yokohama, Japan, pp. 004738–004741, 2015.
56. T. Packebusch and S. Mertens, "Low autocorrelation binary sequences," Journal of Physics A: Mathematical & Theoretical, vol. 49, no. 16, pp. 165001, 2016.
57. R. Lin, M. Soltanalian, B. Tang and J. Li, "Efficient design of binary sequences with low autocorrelation sidelobes," IEEE Transactions on Signal Processing, vol. 67, no. 24, pp. 6397–6410, 2019.
58. M. Dimitrov, T. Baitcheva and N. Nikolov, "Efficient generation of low autocorrelation binary sequences," IEEE Signal Processing Letters, vol. 27, pp. 341–345, 2020.
59. M. J. E. Golay, "The merit factor of long low autocorrelation binary sequences (Corresp.)," IEEE Transactions on Information Theory, vol. IT-28, no. 3, pp. 543–549, 1982.
60. K. A. Rani, M. Abdulmalek, H. A. Rahim, N. S. Chin and A. A. Wahab, "Hybridization of strength Pareto multiobjective optimization with modified cuckoo search algorithm for rectangular array," Scientific Reports, vol. 7, no. 46521, pp. 1–19, 2017.
61. I. Adam, M. F. A. Malek, M. N. M. Yasin and H. A. Rahim, "Double band microwave rectifier for energy harvesting," Microwave & Optical Technology Letters, vol. 58, no. 4, pp. 922–927, 2016.
62. H. A. Rahim, M. Abdulmalek, P. J. Soh and G. A. E. Vandenbosch, "Evaluation of a broadband textile monopole antenna performance for subject-specific on-body applications," Applied Physics A: Materials Science & Processing, vol. 123, no. 97, pp. 1–6, 2017.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.