An Improved Jellyfish Algorithm for Multilevel Thresholding of Magnetic Resonance Brain Image Segmentations

Abstract: Image segmentation is vital when analyzing medical images, especially magnetic resonance (MR) images of the brain. Recently, several image segmentation techniques based on multilevel thresholding have been proposed for medical image segmentation; however, these algorithms become trapped in local minima and exhibit low convergence speeds, particularly as the number of threshold levels increases. Consequently, in this paper, we develop a new multilevel thresholding image segmentation technique based on the jellyfish search algorithm (JSA) (an optimizer). We modify the JSA to prevent descents into local minima and to accelerate convergence toward optimal solutions. The improvement is achieved by applying two novel strategies: ranking-based updating and an adaptive method. Ranking-based updating replaces undesirable solutions with other solutions generated by a novel updating scheme that improves the qualities of the removed solutions. The adaptive strategy exploits the ability of the JSA to find a best-so-far solution while allowing a small amount of exploration to avoid descents into local minima. The two strategies are integrated with the JSA to produce an improved JSA (IJSA). In experiments on brain MR images, the IJSA outperformed competing algorithms in terms of the FSIM, the PSNR, the objective values, and the SD; in terms of the SSIM, the IJSA was competitive with the others.


Introduction
Image segmentation has attracted considerable interest due to the need to separate similar regions within an image to facilitate image processing [1] and computer vision [2]. In treating brain tumors, magnetic resonance (MR) images yield invaluable data on tumor location and complexity [3]. However, to provide those benefits, the images must be accurately segmented to allow rapid diagnosis [3]. Segmentation techniques applied to such images include feature selection-based clustering [4], region detection [5], edge detection [6], and threshold segmentation [7].
Segmentation based on thresholding is commonly used for many applications because of its simplicity, ease of implementation, and high segmentation quality. Broadly, threshold segmentation is classified into bi-level thresholding (which only separates an image into a background and a foreground) and multilevel thresholding (which is commonly used to separate more than two homogeneous objects within an image). The principal problem associated with traditional multilevel thresholding is that the time required grows exponentially with the number of threshold levels (objects) in the segmented image. The traditional techniques used to optimize thresholding are either parametric (using a probability density function to estimate the parameter values of each class in a segmented image) or non-parametric [8], in which optimal threshold values are obtained by maximizing a criterion such as entropy [9], fuzzy entropy [10], or the Otsu method [11,12], without the aid of statistical parameters.
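As a minimal sketch of what multilevel thresholding produces, the following (illustrative, not from the paper) snippet maps each pixel of a grayscale image to one of the m + 1 regions delimited by m sorted thresholds:

```python
import numpy as np

def apply_thresholds(image, thresholds):
    """Map each pixel of a grayscale image (values 0-255) to the index of
    the region delimited by the sorted threshold values."""
    # np.digitize assigns region 0 to pixels below thresholds[0] and
    # region m to pixels at or above thresholds[-1].
    return np.digitize(image, sorted(thresholds))

# Example: two thresholds split the image into three homogeneous regions.
img = np.array([[10, 90], [150, 240]], dtype=np.uint8)
labels = apply_thresholds(img, [80, 160])
print(labels)  # -> [[0 1]
                #     [1 2]]
```

With bi-level thresholding the list contains a single threshold; the optimization problem discussed below is choosing the threshold values themselves.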
Given the time demand when estimating optimal thresholds using traditional techniques, meta-heuristic algorithms, which efficiently solve several optimization problems within reasonable times [13][14][15][16][17][18][19][20], have become widely used to estimate thresholds, and several researchers have applied them to MR image segmentation; some of these studies are reviewed here. In [3], three methods based on particle swarm optimization (PSO) were used to estimate optimal brain segmentation thresholds for MR images: PSO, the Darwinian PSO (DPSO), and the Fractional Order DPSO (FO-DPSO). An adaptive PSO [21] has also been employed to find thresholds by maximizing the Otsu objective; the extracted features were then used to train a convolutional neural network to detect brain tumors automatically.
Das et al. [22] segmented MR images using the Quantum-Inspired Modified Genetic Algorithm (QIANA). The modified algorithm was compared with the classical version; the former performed better in terms of the time required and the segmented image quality. Adaptive wind-driven optimization (AWDO) [23] was developed to optimize thresholding of brain MR images utilizing the Otsu method as the objective function. AWDO was experimentally compared with other optimization algorithms: the RGA, GA, Nelder-Mead simplex, PSO, BF, and ABF algorithms. In [24], an ant colony optimization algorithm was enhanced (by determining the ant's direction and the probability of its next location) and used to segment brain MR images and diagnose tumors.
Differential evolution (DE) [25] has been adapted to optimize multilevel thresholding of brain MR images, with performance improved by employing new mutation methods and a novel adaptive approach to balancing the exploration and exploitation operators. This adaptive DE was empirically compared with three other DE algorithms using T2-weighted brain MR images; the new algorithm performed best in terms of both computational cost and segmented image quality. Other approaches used for brain MR image segmentation include the hybrid firefly algorithm [26], teaching/learning-based optimization [27], artificial bee colony optimization [28], an adaptive bacterial foraging algorithm [29], and improved quantum PSO [30].
To overcome the attraction to local minima and the low convergence speeds that compromise the performance of many algorithms during brain MR image segmentation, we develop here a new multilevel thresholding method based on the jellyfish search algorithm (JSA) (an optimizer) [31], which has been modified to enhance outcomes. The JSA is simple, easy to implement and modify, and effective in terms of mathematical optimization. Our improved JSA (IJSA) uses a ranking-based update method to prevent descents into local minima and a new adaptive method to widen and deepen exploitation around the best-so-far solution. The JSA and IJSA were extensively validated using seven brain MR images, and compared with each other and with some recent robust optimization algorithms, including the marine predator algorithm (MPA) [30], the improved MPA (IMPA) [30], the salp swarm algorithm (SSA) [32], the modified SSA (MSSA) [29], and the equilibrium optimizer (EO) [33]. The experimental data proved that the IJSA was best in terms of the peak signal-to-noise ratio (PSNR), the feature similarity index (FSIM), the standard deviation (SD), and fitness; and was competitive in terms of the Structural Similarity Index Metric (SSIM).
The remainder of the paper is organized as follows. Section 2 presents the multi-threshold image segmentation problem using the Otsu method. Section 3 describes the JSA. Section 4 explains the proposed algorithm. Section 5 discusses the outcomes of experiments. Section 6 concludes and discusses future work.

Otsu Method
In [11], the Otsu method was used to extract optimal threshold values for an image in which homogeneous regions are separated by maximizing the between-class variance among regions. Mathematically, assume that an image has m + 1 homogeneous regions that are mutually separated by m threshold values [t_0, t_1, t_2, . . . , t_m] derived by maximizing the fitness function

F(t_0, t_1, . . . , t_m) = Σ_{i=0}^{m} σ_i^2 = Σ_{i=0}^{m} ω_i (μ_i − μ_T)^2 (1)

where the σ_i^2 (i = 0, . . . , m) are the between-class variances; the ω_i the regional probabilities; the μ_i the mean gray levels of the separated regions; μ_T the mean gray level of the whole image; and L the maximum gray level. The probability distribution of gray level i is symbolized as p_i and calculated as p_i = h_i/NP, where h_i is the count of pixels with gray level i and NP is the total number of image pixels. The optimal threshold values are extracted by maximizing Eq. (1) as the objective function of the proposed algorithm.
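The Otsu objective above can be sketched as follows; this is an illustrative implementation of Eq. (1) over a 256-bin histogram, not the paper's code:

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Between-class variance for thresholds [t_0..t_m] (Eq. (1)):
    the sum over regions of omega_i * (mu_i - mu_T)**2, to be maximized."""
    p = hist / hist.sum()                    # p_i = h_i / NP
    levels = np.arange(len(p))
    mu_T = (levels * p).sum()                # mean gray level of the whole image
    edges = [0] + sorted(thresholds) + [len(p)]
    sigma_B = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        omega = p[lo:hi].sum()               # regional probability omega_i
        if omega > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / omega  # regional mean mu_i
            sigma_B += omega * (mu - mu_T) ** 2
    return sigma_B
```

The optimizer evaluates candidate threshold vectors with this function and keeps the vector yielding the largest between-class variance.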

The Artificial Jellyfish Search Algorithm (An Optimizer)
Recently, a new meta-heuristic algorithm [31] based on the behaviour of jellyfish, the artificial Jellyfish Search Algorithm (JSA) (an optimizer), was developed for mathematical optimization. Jellyfish search for food by exploiting both ocean currents and movements within a swarm, switching between the two over time.

Initialization Step
In this phase, solutions are efficiently distributed within the search space of the problem to cover all of the space. This helps the algorithm avoid premature descents into local minima and accelerates convergence toward optimal solutions. We evaluated several chaotic maps for initializing the population within the search space; the logistic map [34] performed best.
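A minimal sketch of logistic-map initialization follows; the chaotic parameter value (eta = 4) and the seeding range are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def logistic_init(N, dim, lb, ub, eta=4.0, seed=None):
    """Chaotic initialization of N solutions of `dim` dimensions in [lb, ub]
    using the logistic map c_{k+1} = eta * c_k * (1 - c_k)."""
    rng = np.random.default_rng(seed)
    x = np.empty((N, dim))
    # Seed the chaotic sequence away from 0 and 1 (fixed points of the map).
    c = rng.uniform(0.01, 0.99, size=dim)
    for i in range(N):
        c = eta * c * (1.0 - c)              # iterate the logistic map
        x[i] = lb + c * (ub - lb)            # scale the chaos value into [lb, ub]
    return x
```

For multilevel thresholding, each dimension of a solution is a candidate threshold, so lb and ub would be 0 and the maximum gray level.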

Optimization
Early in optimization, the initialized solutions are compared in terms of quality, and the fittest is chosen as the food location X*. Eq. (9) is then used to update each solution X_i in the direction of the ocean current:

X_i(t + 1) = X_i(t) + r .* (X* − β · r_1 · μ) (9)

where r is a random vector and r_1 a random number, both between 0 and 1; .* denotes element-wise multiplication; β > 0 is a distribution coefficient (β = 3, as recommended in [31]); and μ is the mean of all solutions at the current iteration. Movements inside the swarm may be passive (the jellyfish explores around its own location), computed using Eq. (7), or active, computed using Eq. (8):

X_i(t + 1) = X_i(t) + γ · r_3 · (U_b − L_b) (7)
X_i(t + 1) = X_i(t) + r .* D (8)

where r_3 is a random number between 0 and 1; γ > 0 controls the motion length around the current location; and U_b and L_b are the upper and lower bounds of the search space, respectively.
r is a vector with random values between 0 and 1. D determines the direction of active motion and is mathematically formulated as

D = X_j(t) − X_i(t) if f(X_i) ≥ f(X_j); otherwise, D = X_i(t) − X_j(t)

where j is the index of a solution selected at random and f is the objective function. To model the trade-off between the ocean current and the passive and active motions, a predefined constant c_0 and the time control function of Eq. (10) are used, yielding the time control mechanism depicted in Fig. 1:

c(t) = |(1 − t/t_max) × (2r − 1)| (10)

where t refers to the current evaluation, t_max to the maximum number of evaluations, and r is a random number between 0 and 1.
As shown in Fig. 1, when c(t) ≥ c_0, the solutions are updated by following the ocean current. Otherwise, when a random number r_4 between 0 and 1 is greater than (1 − c(t)), the current solution is updated using passive motion; if not, active motion is employed. Fig. 2 shows these steps.
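The update cycle above can be sketched in a few lines; this is a simplified reading of the described rules (for a minimization objective), with illustrative defaults for γ and c_0:

```python
import numpy as np

def jsa_step(X, f, x_star, lb, ub, t, t_max, beta=3.0, gamma=0.1, c0=0.5, rng=None):
    """One JSA iteration over population X (shape (N, dim)) with objective f,
    best-so-far solution x_star, and bounds [lb, ub]."""
    rng = rng or np.random.default_rng()
    N, dim = X.shape
    mu = X.mean(axis=0)                                  # mean of all solutions
    c_t = abs((1 - t / t_max) * (2 * rng.random() - 1))  # time control, Eq. (10)
    for i in range(N):
        if c_t >= c0:                                    # ocean current, Eq. (9)
            X[i] = X[i] + rng.random(dim) * (x_star - beta * rng.random() * mu)
        elif rng.random() > 1 - c_t:                     # passive motion, Eq. (7)
            X[i] = X[i] + gamma * rng.random() * (ub - lb)
        else:                                            # active motion, Eq. (8)
            j = rng.integers(N)
            D = X[j] - X[i] if f(X[i]) >= f(X[j]) else X[i] - X[j]
            X[i] = X[i] + rng.random(dim) * D
    return np.clip(X, lb, ub)
```

Because c(t) shrinks as t approaches t_max, the population drifts from current-driven exploration toward swarm motions late in the run.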

The Proposed Approach
This section discusses IJSA initialization, ranking-based updating, and the use of a novel adaptive strategy.

Initialization
During initialization, a group of N solutions, each of m dimensions, is randomly generated within the search space of the problem using Eq. (11):

X_i = L_b + r × (U_b − L_b) (11)

where r is a random number between 0 and 1.

Ranking-Based Updating Method (RUM)
Our recently published ranking-based updating method (RUM) [30] was used here to replace ineffective solutions with others created by a novel updating scheme. Ineffective solutions are those that remain no better than the current local best solution over NS consecutive iterations. This scheme searches extensively around the best-so-far solutions in an effort to improve the convergence speed and avoid premature convergence to local minima. The replacement solutions are generated from the best-so-far solution and the scaled difference between two randomly selected solutions X_a and X_b, where a and b are two random indices between 0 and N and r is a vector with random values between 0 and 1.
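A sketch of the RUM follows; the specific replacement rule shown (best-so-far plus a randomly scaled difference of two random solutions) is an illustrative assumption consistent with the description above, not the paper's exact equation:

```python
import numpy as np

def ranking_based_update(X, stagnation, NS, x_star, rng=None):
    """Replace solutions that have failed to improve for NS consecutive
    iterations (tracked in `stagnation`) with new ones generated around
    the best-so-far solution x_star."""
    rng = rng or np.random.default_rng()
    N, dim = X.shape
    for i in range(N):
        if stagnation[i] >= NS:                 # flagged as an ineffective solution
            a, b = rng.choice(N, size=2, replace=False)
            # Hypothetical replacement rule: exploit around x_star while
            # inheriting population diversity through X[a] - X[b].
            X[i] = x_star + rng.random(dim) * (X[a] - X[b])
            stagnation[i] = 0                   # reset the stagnation counter
    return X
```

The stagnation counters would be incremented in the main loop whenever a solution fails to beat its previous fitness.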

A Novel Adaptive Strategy
We developed a new strategy to balance the exploration and exploitation capacities of the proposed algorithm. The basic algorithm explores extensively, which reduces the convergence speed toward the best solution and thus the quality of the obtained solutions. Using our adaptive strategy, current solutions are updated around the best-so-far solutions with two different step sizes; this exploitation improves the outcomes and reduces the probability of descent into local minima. The first step size is based on a random distance between two solutions X_a and X_b, where a and b are indices randomly selected between 0 and N; it is scaled by a random vector r_1 between 0 and 1 and applied to only a subset of the dimensions of the current solution, not to all of them. The second step size is based on the distance between the best-so-far solution, multiplied by twice a number created randomly in [0, 1], and the mean μ of the solutions at iteration t; here r_2 is a random vector between 0 and 1 and r_3 a random number between 0 and 1. The trade-off between the two step sizes is governed by a predefined probability P that is set (based on extensive experimentation) to 0.85. The steps of the IJSA are described in Algorithm 1.
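The adaptive strategy can be sketched as follows; the update formulas are paraphrased from the prose description above rather than quoted from the paper, so treat them as assumptions:

```python
import numpy as np

def adaptive_update(X, x_star, lb, ub, P=0.85, rng=None):
    """Adaptive exploitation around the best-so-far solution x_star, with two
    step sizes traded off by the probability P (0.85, as tuned in the paper)."""
    rng = rng or np.random.default_rng()
    N, dim = X.shape
    mu = X.mean(axis=0)                         # mean of the solutions at iteration t
    for i in range(N):
        if rng.random() < P:
            # Step size 1: random distance between two random solutions,
            # applied only to a random subset of the dimensions.
            a, b = rng.choice(N, size=2, replace=False)
            mask = rng.random(dim) < 0.5
            X[i, mask] = (X[i] + rng.random(dim) * (X[a] - X[b]))[mask]
        else:
            # Step size 2: distance between the best-so-far solution (scaled
            # by twice a random number) and the population mean.
            X[i] = x_star + rng.random(dim) * (2 * rng.random() * x_star - mu)
    return np.clip(X, lb, ub)
```

Keeping P high (0.85) makes the partial-dimension move dominant, which preserves a little diversity while most updates stay near the incumbent population.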

Experiments and Discussion
Here, the proposed algorithm is verified and compared to other algorithms using various performance metrics. We used a Windows 10 platform with a 64-bit operating system; an Intel(R) Core (TM) i7-4700MQ CPU@2.40 GHz; 32 GB of RAM; and MATLAB R2019a for implementation.

Parameter Settings
We compared our proposed algorithm with the MPA [30], the improved MPA (IMPA) [30], the SSA [32], the modified SSA (MSSA) [32], and the EO [33]. The parameters used for those algorithms were those of the cited reports. The parameters used for the standard JSA were those of the original report which were also used for IJSA. Additionally, the IJSA requires a parameter P that maximizes performance. We conducted extensive experiments to determine that the best value was 0.85. The N and t max values were 25 and 100, respectively. The parameters of the proposed algorithm are summarized in Tab. 1.

Evaluation Metrics
We used four objective performance criteria to compare algorithms: the SD, PSNR, FSIM, and SSIM. The PSNR, SSIM, and FSIM are detailed below.
• The PSNR [35] metric was used to calculate segmented image quality:

PSNR = 10 log_10 (255^2 / MSE) (15)

where MSE indicates the mean squared error, calculated as

MSE = (1 / (M × N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (O(i, j) − S(i, j))^2 (16)

where M × N is the image size and O(i, j) and S(i, j) are the source and segmented images, respectively.
• The SSIM [35] metric was used to measure the difference between the structures of the segmented and source images:

SSIM(O, S) = ((2 μ_o μ_s + a)(2 σ_os + b)) / ((μ_o^2 + μ_s^2 + a)(σ_o^2 + σ_s^2 + b)) (17)

where μ_o and μ_s are the mean intensities of the source and segmented images; σ_o and σ_s the SDs of the source and segmented images; σ_os the covariance between the source and segmented images; and a and b fixed values (0.001 and 0.003, respectively).
• The FSIM [36] was used to calculate the feature similarity between the segmented and original images. Mathematically, the FSIM is

FSIM = Σ_{X ∈ Ω} S_T(X) · PC_m(X) / Σ_{X ∈ Ω} PC_m(X) (18)

where Ω refers to the entire spatial domain of the image; S_T(X) is a measure of the similarity between the source and segmented images [37]; and PC_m is the phase congruency, which serves as the primary feature.
The best algorithm has the highest PSNR, FSIM, and SSIM values.
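The PSNR and a single-window (global) SSIM can be sketched as follows, using the constants given above; the FSIM is omitted because it requires a phase-congruency implementation. This is an illustrative sketch, not the paper's code:

```python
import numpy as np

def psnr(original, segmented):
    """PSNR in dB between the source and segmented images (Eqs. (15)-(16))."""
    o = original.astype(np.float64)
    s = segmented.astype(np.float64)
    mse = np.mean((o - s) ** 2)                 # mean squared error
    if mse == 0:
        return float("inf")                     # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(original, segmented, a=0.001, b=0.003):
    """Global (single-window) SSIM with the constants a and b from the text."""
    o = original.astype(np.float64).ravel()
    s = segmented.astype(np.float64).ravel()
    mu_o, mu_s = o.mean(), s.mean()
    cov_os = np.mean((o - mu_o) * (s - mu_s))   # covariance sigma_os
    return ((2 * mu_o * mu_s + a) * (2 * cov_os + b)) / \
           ((mu_o ** 2 + mu_s ** 2 + a) * (o.var() + s.var() + b))
```

Identical images give an infinite PSNR and an SSIM of 1; lower values indicate greater distortion introduced by segmentation.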

Performance Evaluation of the Proposed Algorithm and Other Algorithms
After running each algorithm 40 times for each test image at each threshold level, the average fitness, PSNR, FSIM, SSIM, and SD values of all test images at various threshold levels were calculated (Figs. 4-8). The IJSA was best in terms of fitness, PSNR, FSIM, and SD. Unfortunately, it was outperformed by the IMPA and MPA in terms of the SSIM; this is an important limitation that will be addressed in future work. Thus, the IJSA identified thresholds that separated homogeneous regions within images more accurately and efficiently than the other algorithms did, as revealed by four of the five performance metrics; our proposed algorithm was nonetheless the best overall because it was optimal in terms of most performance metrics.
The objective values yielded by each algorithm on 40 independent runs of images Br1, Br9, and Br3 were experimentally compared with those of the IJSA using the Mann-Whitney U-test [38] at a significance level of 5%, and the outcomes are presented in Tabs. 2-4. The null hypothesis (that the outcomes did not differ) was rejected when p < 0.05; otherwise, it was accepted. In the tables, the columns labeled h contain only two values: 0 indicates acceptance of the null hypothesis and 1 rejection. The alternative hypothesis was accepted for most threshold levels, confirming that the IJSA was superior to the other algorithms.

Conclusions and Future Work
We have presented a new medical image segmentation algorithm based on the novel JSA meta-heuristic, whose performance was improved to extract the optimal thresholds of brain MR images. The improvement underlying the IJSA was based on two novel strategies: ranking-based updating and a new adaptive method. Ranking-based updating replaced inappropriate solutions within populations with others generated via a novel updating scheme, improving the qualities of the removed solutions.
The new adaptive strategy promotes JSA exploitation around the best-so-far solution while retaining a small amount of exploration to avoid descents into local minima. The two strategies were integrated effectively with the JSA to yield the IJSA, which optimally thresholded brain MR images. Seven brain MR images were segmented at threshold levels of 3, 4, 5, 6, 7, 8, 10, 15, 20, 25, and 30. The IJSA was compared with several recent segmentation algorithms, including the improved and standard MPAs, the modified and standard SSAs, the EO, and the standard JSA, in terms of fitness, SSIM, PSNR, SD, and FSIM. The experimental outcomes and the Mann-Whitney U-test proved the superiority of the proposed algorithm in terms of all measures except the SSIM, for which the IJSA was competitive. In future work, we will link the new algorithm to a convolutional neural network to improve the classification accuracy of brain tumor MR images.
Ethical Approval: This article does not contain any studies with human participants or animals performed by any of the authors.

Conflicts of Interest:
The authors declare that they have no conflicts of interest to report regarding the present study.