Computers, Materials & Continua
DOI:10.32604/cmc.2022.019691
Article

Fuzzy Based Hybrid Focus Value Estimation for Multi Focus Image Fusion

Muhammad Ahmad1,*, M. Arfan Jaffar1, Fawad Nasim1, Tehreem Masood1 and Sheeraz Akram2

1The Superior University Lahore, 54000, Pakistan
2University of Pittsburgh, 15213, USA
*Corresponding Author: Muhammad Ahmad. Email: ahmadkahloon@superior.edu.pk
Received: 22 April 2021; Accepted: 26 August 2021

Abstract: Due to the limited depth-of-field of digital single-lens reflex cameras, scene content within a limited distance from the imaging plane remains in focus, while objects closer to or farther from the plane of focus appear blurred (out of focus) in the image. Multi-focus image fusion can be used to reconstruct a fully focused image from two or more partially focused images of the same scene. In this paper, a new Fuzzy Based Hybrid Focus Measure (FBHFM) for multi-focus image fusion is proposed. Selecting an optimal block size is a critical step in multi-focus image fusion, so the Particle Swarm Optimization (PSO) algorithm has been used to find the optimal block size for extracting focus measure features. After the optimal blocks are found, three focus measures, Sum of Modified Laplacian, Gray Level Variance and Contrast Visibility, are extracted and combined using an intelligent fuzzy technique. Fuzzy based hybrid intelligent focus values are then estimated, together with the contrast visibility measure, to generate the focused image. Different sets of multi-focus images have been used in detailed experimentation, and the results have been compared with state-of-the-art techniques such as image fusion based on the Genetic Algorithm (GA), Principal Component Analysis (PCA), the Laplacian pyramid, the discrete wavelet transform (DWT) and the advanced DWT (aDWT). It has been found that the proposed method performs well compared to existing methods.

Keywords: Fuzzy logic; multi-focus image fusion; defocus; focus; contrast visibility; focus measure

1  Introduction

Image fusion plays a significant role in current state-of-the-art technology such as robot vision, object recognition, target detection, satellite imaging, surveillance and medical imaging. The aim of image fusion is to construct a single image that captures the maximum detail from input images having different focuses [1]. In block-based fusion techniques, the focus in the image blocks is quantified using several types of features such as Sum Modified Laplacian (SML), variance, energy of image gradient and Spatial Frequency (SF). The image fusion process is conducted at the pixel level in the spatial domain by observing such rules, while the transform domain takes advantage of properties prevailing at various resolution levels. An overview of focused and defocused image formation is given in depth later in this section. The synthesis of information can take place at the pixel, feature and decision levels [2], each of which has its own limits. Fusion at the pixel level is done under a set of rules. In feature-based fusion techniques, feature values of the corresponding blocks of the input images are computed and the winning block is chosen for the fused image.

Several multi-focus image fusion techniques have been reported in the literature, including image fusion using the Laplacian pyramid [3], image fusion using PCA [4], image fusion based on the wavelet transform [5] and image fusion based on the advanced DWT [6]. The wavelet transform has the advantage over pyramid decomposition of providing directional information, which gives resistance to intense changes in the input images. However, it is linear in nature [7], and its signal decomposition loses some of the original data [8].

PCA based techniques are computationally efficient, but they do not generate the required results for all datasets. Edges become flat in the low-pass filtering of wavelets, which decreases the contrast of the fused image. SML-based fusion techniques generate better results, but they are computationally expensive [9]; this problem is tackled using a bilateral gradient-based sharpness criterion. In the recent literature on multi-focus image fusion in a multi-scale environment, we can see many techniques such as the discrete cosine harmonic wavelet transform (DCHWT) [10], DCT using variance and consistency verification [11], adaptive blocks in the wavelet domain [12], the discrete cosine transform (DCT) using variance [13], the guided image filter (GFF) and the cross bilateral filter (CBF) [14]. It is important to mention that these techniques are computationally expensive. Fused images generated by DCHWT may suffer from blocking effects, and the use of adaptive blocks in the DWT domain may hinder the generation of better fused images. In [15], a new sharpness criterion is proposed to enhance the perception of a scene using multi-focus fusion of multiple images. Morphological wavelet operations are performed for multi-focus image fusion and the results are evaluated based on the image gradient in [16]. In [17], salient image regions are detected using segmentation, compression and object recognition techniques. A weight map is constructed to identify visually important regions in [18]. In [19], gray-level similarities and geometric similarities at the pixel level are exploited for multiple image fusion. A hierarchical pyramid approach is used in image fusion to preserve the visual perception of the image in [20].

Information extraction from an image may involve different phases, for example image enhancement, restoration and segmentation. It is essential that the image remains clear during processing, especially those parts of the image that contain important information. Such processing requires an all-in-focus image. Due to some limitations of the image acquisition environment, the captured images are not all-in-focus. These limitations can be external or internal restrictions of the image acquisition instruments, such as low lighting conditions, the limited focusing power of the cameras and the motion of some objects within the scene, and they can cause blurriness in the acquired image. Consequently, some parts of the acquired image get defocused, making the necessary details obscure. One possible solution is image fusion: the process of consolidating all partially focused images of the same scene into a single composite image. In Fig. 1, the basic image creation process is shown based on paraxial-geometric optics, while Figs. 6 to 10 show some pairs of multi-focus images. Light rays from any point P of an object in the scene fall on the lens and are refracted so that they converge at point P′ on the image plane. The location of point P and the position of its image P′ on the image plane are related by the well-known Gaussian lens formula given below.

\frac{1}{f} = \frac{1}{u} + \frac{1}{v} \quad (1)

Here f represents the focal length, and u and v are the distances of the object and of the focused image from the lens plane, respectively.
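As a quick illustration of Eq. (1), the distance v of the focused image can be computed from f and u. The following Python sketch is only illustrative; the function name and the example values (a 50 mm lens and an object 2000 mm away) are assumptions, not taken from the paper.

# Illustrative use of the Gaussian lens formula (Eq. 1): 1/f = 1/u + 1/v
def focused_image_distance(f_mm, u_mm):
    """Return v, the distance of the focused image from the lens plane."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

v = focused_image_distance(50.0, 2000.0)  # ~51.28 mm for a 50 mm lens and an object at 2 m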

The location of a point in the focused image can be uniquely identified based on its radiance and location in the scene; hence the locations of a point on the object and of its image correspond to each other. This reveals that a focused image is generated when the image detector coincides with the image plane, i.e., when s = v. In the s = v situation we get a clear image P′ of the point P in the scene, whereas a blurred image P″ is created when s ≠ v, as shown in Fig. 1. The parameters f, s and v specify the amount of blurring in a defocused image. The formation of a defocused image is shown in Fig. 2. Based on the attributes s, f and the aperture diameter D, the parameters of the camera can be defined using the following equation.

e=(s,f,D) (2)


Figure 1: Image creation model


Figure 2: Defocused image

Typically, the sensors are planar image detectors such as CCD arrays, so if the objects are curved in shape, the image will be partially focused and partially defocused. This means that certain parts of the picture are in focus while the others are out of focus.

Based on the Gaussian model, a defocused image represented by g(x, y) can be modeled by convolving the Point Spread Function (PSF) h(x, y) of the camera with the focused image f(x, y). It is expressed by the following equation.

g(x, y) = h(x, y) \oplus f(x, y) \quad (3)

Here the sign ⊕ represents the convolution operator.

From Eq. (3), it is clear that defocusing acts as a low-pass filter, reducing the bandwidth as the defocusing effect grows. An object in an image becomes out of focus due to three reasons (a small illustrative sketch of this blurring model is given after the list below).

(a)   The sensor and image plane are not aligned

(b)   The lens is not static

(c)   The object and object plane do not remain aligned
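The following minimal sketch illustrates the blurring model of Eq. (3) by convolving a focused image with an assumed Gaussian PSF; the true PSF depends on the camera parameters e = (s, f, D) of Eq. (2), so the function name and the choice of a Gaussian kernel are illustrative assumptions only.

import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_defocus(focused, sigma=2.0):
    # Sketch of Eq. (3): g(x, y) = h(x, y) convolved with f(x, y).
    # A Gaussian PSF h is assumed here purely for illustration; a larger
    # sigma gives a stronger low-pass (defocus) effect.
    return gaussian_filter(focused.astype(np.float64), sigma=sigma)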

2  Proposed Methodology: Fuzzy Based Hybrid Focus Measure (FBHFM)

There are some problems that occur due to movement of the lens or the sensor with respect to each other. First, it results in a varying magnification of the system, and hence the image coordinates of in-focus positions on the object change. Second, the image brightness varies because the sensor area over which the light energy is spread changes.

The main purpose of the fusion process is to integrate all the relevant and important information from different images, each having different objects or areas out of focus, into a single image with minimal computational cost. The focused parts of the input images are placed in the fused image. We present a novel fuzzy classifier based on a hybrid focus measure (Fig. 4). It effectively selects the focused parts from the input images at a lower computational cost compared to existing techniques. In order to choose the focused areas from the input images, it is important to use features that can characterize every pixel in terms of visibility, sharpness and variance within a window. We use SML and Gray Level Variance (GLV) in addition to Contrast Visibility (CV); these features are described in the next sections. Block size is the most important factor for fusion. First, the input images are divided into blocks. It has been noticed through experimentation that different block sizes work well for different images, and there is no specific rule for which block size is best for fusion: sometimes small sizes work well for a specific type of image and sometimes larger block sizes perform better. Therefore, the optimal selection of the block size is essential. In this paper, we use the same block size selection method that was previously proposed using PSO. We experimented with this method and it works well in our case; therefore, only a short description of it is provided here.

2.1 PSO Based Optimal Block Size Selection

Particle Swarm Optimization (PSO) is an accurate optimization algorithm presented in 1995 by J. Kennedy and R. Eberhart to deal with continuous optimization problems [21]. The algorithm repeatedly tries to improve candidate solutions. Inspired by nature, each bird is called a particle, and its velocity and position are initialized randomly. Every candidate solution of the problem is considered a particle. Each solution is assigned a fitness value, which is assessed and optimized through a fitness function. The position and velocity of each particle are updated according to Eqs. (4) and (5).

x_{id}(t+1) = x_{id}(t) + v_{id}(t+1) \quad (4)

v_{id}(t+1) = w \, v_{id}(t) + c_1 r_1 \left( pb_{id} - x_{id}(t) \right) + c_2 r_2 \left( gb_{id} - x_{id}(t) \right) \quad (5)

Here d = 1, 2, ..., D, where D is the dimension of the search space, w is the inertia weight, and i = 1, 2, ..., n, where n represents the population size; pb_{id} and gb_{id} denote the personal best and global best positions, and r_1 and r_2 are random numbers in [0, 1]. It has been observed through detailed experimentation that smaller blocks tend to provide more accurate fusion results than bigger blocks, since smaller blocks can distinguish the blurred areas from the unblurred parts more precisely than blocks of bigger size.

In order to search for an optimal block size, as shown in Fig. 3, we divide the search space into three buckets, 1–1/16, 1–1/8 and 1–1/4, and initialize 50% of the population from the 1/16 portion, 30% of the population from the 1/8 portion and 20% of the population from the 1/4 portion of the search space. The optimization process becomes faster through this initialization.

Tab. 1 shows the parameters and their values set for the experiments. The population size is set to 15 for all the input images in the experiment. c1 and c2 represent the social and cognitive behavior and are both set to 2. The number of generations N is set to 10. The inertia weight w is fixed at 0.3, and the maximum velocity update vmax for a particle is set to 15. We noticed that if the values of c1 and c2 exceed 2, the velocity of a particle changes by a bigger factor and the particle takes a long jump, which results in skipping significant regions and boundaries.
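A minimal sketch of the PSO updates of Eqs. (4) and (5), using the parameter values of Tab. 1, is given below. The fitness function (the fusion quality obtained for a given block size), the search bounds and the uniform initialization are assumptions for illustration; the paper itself initializes the population from the 1/16, 1/8 and 1/4 buckets described above.

import numpy as np

def pso_block_size(fitness, dim=1, pop=15, gens=10, w=0.3, c1=2.0, c2=2.0,
                   vmax=15.0, lo=4.0, hi=64.0):
    # fitness(x) is assumed to score the fusion quality for candidate block size x.
    x = np.random.uniform(lo, hi, (pop, dim))              # particle positions
    v = np.zeros((pop, dim))                               # particle velocities
    pb = x.copy()                                          # personal bests
    pb_fit = np.array([fitness(p) for p in x])
    gb = pb[np.argmax(pb_fit)].copy()                      # global best
    for _ in range(gens):
        r1, r2 = np.random.rand(pop, dim), np.random.rand(pop, dim)
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)   # Eq. (5)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, lo, hi)                            # Eq. (4)
        fit = np.array([fitness(p) for p in x])
        better = fit > pb_fit
        pb[better], pb_fit[better] = x[better], fit[better]
        gb = pb[np.argmax(pb_fit)].copy()
    return gb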


Figure 3: Population initialization


There can be many stopping criteria to control the optimization process, such as the number of generations completed; the optimization can also be stopped if the maximum fitness is achieved or if the fitness remains constant over many iterations. We limited the number of generations to control the optimization process. It was observed during experimentation that the fitness value of the fused image increases over 5 to 10 generations and remains the same or starts decreasing after this.

2.2 Sum Modified Laplacian

For each pixel within the window, a measure based on the second derivative is used to calculate the focus value, or sharpness. The Laplacian operator may generate components opposite in sign that cancel each other, hence producing zero output. This problem is dealt with by taking the energy of the Laplacian operator. The window size is kept small for better performance.

F = \sum_{i=1}^{W} \sum_{j=1}^{W} \left( \left| \frac{\partial^2 g(i,j)}{\partial i^2} \right| + \left| \frac{\partial^2 g(i,j)}{\partial j^2} \right| \right) \quad (6)

Here W represents the window size.
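A minimal sketch of Eq. (6) on a single W × W window, using discrete second differences as the second derivatives; the function name and the use of NumPy are illustrative assumptions.

import numpy as np

def sum_modified_laplacian(win):
    # Sketch of Eq. (6): sum of absolute second derivatives over the window.
    g = win.astype(np.float64)
    d2i = np.abs(2 * g[1:-1, :] - g[:-2, :] - g[2:, :])   # |d^2 g / d i^2|
    d2j = np.abs(2 * g[:, 1:-1] - g[:, :-2] - g[:, 2:])   # |d^2 g / d j^2|
    return d2i.sum() + d2j.sum()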

2.3 Gray Level Variance

A greater variance of the gray levels within a region represents its sharpness level; hence this feature has been used efficiently to calculate the focus value within a small window.

F = \frac{1}{W^2} \sum_{i=1}^{W} \sum_{j=1}^{W} \left( g(i,j) - \mu \right)^2 \quad (7)

Here W and μ are the size of window and mean of window respectively.
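Eq. (7) is the ordinary variance of the window; a short sketch (illustrative, not from the paper) follows.

import numpy as np

def gray_level_variance(win):
    # Sketch of Eq. (7): variance of the gray levels inside a W x W window.
    g = win.astype(np.float64)
    return np.mean((g - g.mean()) ** 2)   # equivalently np.var(g)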

2.4 Contrast Visibility

Contrast visibility is the variation in luminance that makes an object distinct [5]. Blurriness makes it hard to distinguish between low contrast and high contrast. Contrast visibility estimates the deviation of the window pixels from the window's mean value and checks the clarity within the window.

F = \frac{1}{m \times n} \sum_{(i,j) \in W_k} \frac{\left| I(i,j) - \mu_k \right|}{\mu_k} \quad (8)

Here m × n is the size of the window and μk is the mean value of the window.
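A minimal sketch of Eq. (8); the small eps added to the denominator to avoid division by zero is an assumption, not part of the original formula.

import numpy as np

def contrast_visibility(win, eps=1e-6):
    # Sketch of Eq. (8): mean absolute deviation from the window mean,
    # normalised by that mean.
    g = win.astype(np.float64)
    mu = g.mean()
    return np.mean(np.abs(g - mu) / (mu + eps))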

2.5 Fuzzy-Based Fused Value Estimation

In this section we discuss the hybrid fusion rule module based on fuzzy logic. To evaluate the strengths and weaknesses of FBHFM, we first calculate the gray level variance, contrast visibility and sum modified Laplacian, and then integrate all these rules into a single fusion rule to obtain the resulting fused image. We use a fuzzy rule-based classifier to obtain the fused image; the fuzzy if-then system is considered the simplest fuzzy classifier. The outcome of each fusion rule is labeled as a class. The classification rules of this classifier are represented as the following linguistic rules:

(1)   IF SF1 is large AND SF2 is small THEN F2 is output.

(2)   IF SF1 is large AND SF2 is large THEN F1 is output.

(3)   IF SF1 is medium AND SF2 is small THEN F2 is output.

(4)   IF SF1 is medium AND SF2 is large THEN F1 is output.

(5)   IF SF1 is small THEN F3 is output.

Here SF1 = |SF11 − SF12| and SF2 = |SF21 − SF22|

To calculate the degree to which the intensity of each pixel belongs to a fuzzy set, we define a number of membership functions. A membership function maps the input X to the interval [0, 1], i.e., X → [0, 1]. Each linguistic value is represented by a membership function, and this membership function is constructed adaptively by analyzing the spatial frequency, so that the nearest values can be chosen for fusion and the detail-preservation capability can be achieved. The equation of the trapezoidal-shaped fuzzy membership function discussed above is as follows:

f(x; a, b, c, d) =
\begin{cases}
0, & x \le a \\
\frac{x - a}{b - a}, & a \le x \le b \\
1, & b \le x \le c \\
\frac{d - x}{d - c}, & c \le x \le d \\
0, & d \le x
\end{cases} \quad (9)

Here a, b, c and d are scalar parameters: a and d represent the feet, and b and c the shoulders, of the trapezoid. The vector x contains the input values for the trapezoidal function.
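A minimal sketch of the trapezoidal membership function of Eq. (9); the function name trapmf and the NumPy vectorized form are assumptions for illustration.

import numpy as np

def trapmf(x, a, b, c, d):
    # Sketch of Eq. (9): 0 below a, rising on [a, b], 1 on [b, c],
    # falling on [c, d], 0 above d.
    x = np.asarray(x, dtype=np.float64)
    rising = (x - a) / (b - a) if b != a else np.ones_like(x)
    falling = (d - x) / (d - c) if d != c else np.ones_like(x)
    return np.clip(np.minimum(np.minimum(rising, 1.0), falling), 0.0, 1.0)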

Following are the equations to calculate a, b, c, and d (Fig. 5).

a = k_1 \times \min\{\mu_i, \mu_g\} \quad (10)

b = k_2 \times \max\{\mu_i, \mu_g\} \quad (11)

c = k_3 \times b \quad (12)

d = k_4 \times c \quad (13)

where k1, k2, k3 and k4 are adjusting parameters. They depend on the estimated focus level, represented by the variance of the spatial frequency σg, and on constant values; μi and μg are the means of a local neighborhood centered around a pixel i with radius Ri in each of the two images.

k1 determines a, which locates the left foot of the trapezoid used in the fuzzy set construction process, and is set to 2.9 × σg. k2 determines b, which locates the left shoulder of the trapezoid, and is set to 1.3 + 0.72 × σg. k3 determines c, which locates the right shoulder of the trapezoid, and is set to 2.3. k4 determines d, which locates the right foot of the trapezoid, and is set to 1.7. Now, the fuzzy membership (weight) of each focus measure is computed using the following equations:

w1=f(μi;a,b,c,d) (14)

w2=f(μi;a,b,c,d) (15)

w3=f(μi;a,b,c,d) (16)

where w1, w2 and w3 are the near-optimal contributions of the respective focus measures. The AND operator is the minimum by default, but any other t-norm can be used; we selected the algebraic product for the AND operation. The rules "vote" for the class of their consequent part, and the weight of this vote is wi. The votes of all rules are aggregated to determine the output of the classifier; we used the average aggregation method. Finally, the fused image value f(x, y) is obtained using the following equation:

f(x, y) = \frac{w_1 \times F_1 + w_2 \times F_2 + w_3 \times F_3}{3} \quad (17)
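Putting Eqs. (14) to (17) together, the sketch below weights the three focus measures by their fuzzy memberships (computed with the trapmf sketch above, exactly as written in Eqs. (14) to (16)) and averages the weighted contributions; the function name and calling convention are illustrative assumptions.

def fused_value(F1, F2, F3, mu_i, a, b, c, d):
    # Sketch of Eqs. (14)-(17): membership weights of the three focus
    # measures, blended into one fused value.
    w1 = trapmf(mu_i, a, b, c, d)   # Eq. (14)
    w2 = trapmf(mu_i, a, b, c, d)   # Eq. (15)
    w3 = trapmf(mu_i, a, b, c, d)   # Eq. (16)
    return (w1 * F1 + w2 * F2 + w3 * F3) / 3.0   # Eq. (17)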

3  Spatial Frequency (SF)

Spatial frequency measures the activity level in an image. We use spatial frequency to define the rule base of the fuzzy classifier. It is calculated with the following equation.

SF = \sqrt{(RF)^2 + (CF)^2} \quad (18)


Figure 4: Block diagram of FBHFM

where

RF = \sqrt{ \frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=2}^{n} \left[ W(i,j) - W(i,j-1) \right]^2 } \quad (19)

CF = \sqrt{ \frac{1}{m \times n} \sum_{i=2}^{m} \sum_{j=1}^{n} \left[ W(i,j) - W(i-1,j) \right]^2 } \quad (20)

Here W is the window of size m × n. A large value of spatial frequency represents a high level of detail in the window.
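A minimal sketch of Eqs. (18) to (20) for an m × n window; the function name is an assumption.

import numpy as np

def spatial_frequency(win):
    g = win.astype(np.float64)
    m, n = g.shape
    rf = np.sqrt(np.sum((g[:, 1:] - g[:, :-1]) ** 2) / (m * n))   # Eq. (19)
    cf = np.sqrt(np.sum((g[1:, :] - g[:-1, :]) ** 2) / (m * n))   # Eq. (20)
    return np.sqrt(rf ** 2 + cf ** 2)                             # Eq. (18)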


Figure 5: Fuzzy membership function

4  Observations and Results

In order to assess the efficiency of our fusion technique against existing techniques, we used two different sets of quantitative measures: those applicable when a reference image is provided and those applicable when it is not available within the existing datasets. The latter category belongs to the area of blind image fusion and is hence a tougher task. These sets of quantitative measures are described in the following sub-sections.

4.1 Performance Measures

4.1.1 Mutual Information (MI)

Mutual information is an important measure that is often used in multi-focus image fusion. It determines the amount of information transferred into the fused image from the partially focused input images. When a reference image is available, the following equation is used to find the mutual information between the reference and fused images.

MI = \sum_{i=1}^{m} \sum_{j=1}^{n} h_{R,F}(i,j) \log_2 \left[ \frac{h_{R,F}(i,j)}{h_R(i) \, h_F(j)} \right] \quad (21)

Here h_{R,F} is the normalized joint grayscale histogram of R and F, whereas h_R and h_F are their normalized grayscale histograms. For the case of blind image fusion, where a reference image is not available, mutual information can be calculated as the sum of MI(I1, F) and MI(I2, F), where I1 and I2 are the partially focused input images. A large value of mutual information represents a good fusion result.
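A minimal sketch of Eq. (21) using normalized joint and marginal histograms; the bin count of 256 and the function name are assumptions.

import numpy as np

def mutual_information(ref, fused, bins=256):
    # Sketch of Eq. (21): MI between a reference image R and fused image F.
    joint, _, _ = np.histogram2d(ref.ravel(), fused.ravel(), bins=bins)
    joint /= joint.sum()                      # normalized joint histogram h_{R,F}
    h_r = joint.sum(axis=1, keepdims=True)    # marginal histogram h_R
    h_f = joint.sum(axis=0, keepdims=True)    # marginal histogram h_F
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (h_r @ h_f)[nz]))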

4.1.2 Entropy

Entropy gives the rate of change in gray levels within an image. The following is the equation for entropy.

H = -\sum_{i=0}^{L-1} h_F(i) \log_2 h_F(i) \quad (22)

Here h_F is the grayscale histogram and L is the total number of gray levels in the image.
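A minimal sketch of Eq. (22), assuming 8-bit images (L = 256).

import numpy as np

def entropy(img, levels=256):
    # Sketch of Eq. (22): Shannon entropy of the normalized histogram.
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                 # skip empty bins (log2(0) is undefined)
    return -np.sum(p * np.log2(p))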

4.1.3 Similarity Index (SI)

SI is used to determine the similarity between any two images; in image fusion, it is used between the reference and fused images. The following are the equations.

SI = \frac{2 \, C_{rf}}{C_r + C_f} \quad (23)

where

C_r = \sum_{i=1}^{m} \sum_{j=1}^{n} R(i,j)^2 \quad (24)

C_f = \sum_{i=1}^{m} \sum_{j=1}^{n} F(i,j)^2 \quad (25)

and

C_{rf} = \sum_{i=1}^{m} \sum_{j=1}^{n} R(i,j) \, F(i,j) \quad (26)

The correlation between two images varies from 0 to 1 depending on the similarity between the images: a small correlation value shows high dissimilarity between the images and vice versa.
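A minimal sketch of Eqs. (23) to (26); the function name and the reading of Eq. (26) as the sum of the pixel-wise products R(i, j)·F(i, j) are assumptions.

import numpy as np

def similarity_index(ref, fused):
    r = ref.astype(np.float64)
    f = fused.astype(np.float64)
    c_r = np.sum(r ** 2)            # Eq. (24)
    c_f = np.sum(f ** 2)            # Eq. (25)
    c_rf = np.sum(r * f)            # Eq. (26)
    return 2 * c_rf / (c_r + c_f)   # Eq. (23)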

4.1.4 Standard Deviation (SD)

In the fused image, the standard deviation is used to check the contrast. A greater value represents a well-contrasted image.

SD = \sqrt{ \sum_{i=0}^{L} \left( i - \bar{i} \right)^2 h_F(i) }, \quad \text{where } \bar{i} = \sum_{i=0}^{L} i \, h_F(i) \quad (27)

Here hF is the normalized histogram of fused image and L is the number of gray levels.
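A minimal sketch of Eq. (27) from the normalized histogram of the fused image, again assuming 8-bit gray levels; the function name is an assumption.

import numpy as np

def fused_standard_deviation(fused, levels=256):
    # Sketch of Eq. (27): standard deviation of the fused image computed
    # from its normalized histogram.
    hist, _ = np.histogram(fused.ravel(), bins=levels, range=(0, levels))
    h = hist / hist.sum()
    i = np.arange(levels)
    i_bar = np.sum(i * h)
    return np.sqrt(np.sum((i - i_bar) ** 2 * h))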

4.2 Images Dataset

We experimented on various multi-focus image datasets such as Pepsi, flower, lab, clock and balloons. They are available online.

4.3 Comparison Methods

The performance of our proposed algorithm is also evaluated against other spatial-domain multi-focus image fusion algorithms. We adopted the default parameter settings for all of these methods. The efficiency of FBHFM can be determined qualitatively by visual inspection and quantitatively by using fusion metrics.

4.4 Qualitative Analysis

A comprehensive qualitative comparison between the proposed FBHFM technique and existing techniques is given using five different datasets: clock, Pepsi, flower, lab and balloon. These imaging datasets are illustrated in Figs. 6 to 10, where the magnified parts of the fused images generated by the proposed FBHFM method and by different multi-focus image fusion techniques are also given to examine their visual quality. The visual quality of the fused Pepsi image is characterized by the sharpness and clarity of objects such as lines, edges and text, and by the collection of focused regions from the set of input multi-focus images. The proposed FBHFM method recovers the focused regions in Fig. 6 better than the existing techniques. These images contain text, and our method produces greater sharpness and cleaner edges along the text, which clearly reveals the efficiency of our technique and results in enhanced readability of the text. In Fig. 7, the left side of Fig. 7a is focused while the right side of the image is not clear; on the other hand, the right side of Fig. 7b is in focus and the left side of the image is not clear. The visual quality of the fused flower image is enhanced in terms of sharpness and clarity of objects such as leaves and stems through improved contrast. The proposed FBHFM method yields better focused regions than the existing techniques, as observed in Fig. 7c. Due to the effective hybrid focus value estimation in the input images, the better visual quality provided by the proposed method can be observed in Fig. 7.


Figure 6: Pepsi dataset (a) left focus (b) right focus (c) results of proposed method


Figure 7: Flower dataset (a) left focus (b) right focus (c) results of proposed method

We used the balloon images shown in Fig. 8, which provide visual comparisons of the fused images. In these images, the lines of the texture on the balloons are not clear in Figs. 8a and 8b; the proposed FBHFM method produces sharpness and clarity on these textures, giving the better result in Fig. 8c. Fig. 9c shows the fused image obtained by applying FBHFM. The fused image produced by this technique clearly preserves minor details such as the text, numbers and dots in the smaller clock of Fig. 9a and the vertical lines in the bigger clock on the right in Fig. 9b; the fused image produced by FBHFM gives a better result than the existing techniques, as shown in Fig. 9c. In Fig. 10a the clock is focused and the rest of the scene is defocused, while in Fig. 10b the clock is defocused, so the text on the clock cannot be seen clearly, and the rest of the lab environment is clear. The proposed FBHFM method applied to these two images produces better results in Fig. 10c compared to the other techniques.


Figure 8: Balloon dataset (a) left focus (b) right focus (c) results of proposed method


Figure 9: Clock dataset (a) left focus (b) right focus (c) results of proposed method


Figure 10: Lab dataset (a) left focus (b) right focus (c) results of proposed method

4.5 Quantitative Analysis

Visual inspection alone is not capable of establishing the better performance of any fusion algorithm; it should be verified by quantitative analysis. For this purpose, five different quantitative measures, namely entropy, mutual information, similarity index, spatial frequency and standard deviation, are used to gauge the accuracy and performance of our method. These quantitative measures are discussed in Section 4.1. Tab. 2 shows a quick comparison of our proposed method and existing techniques using these statistical measures. These results are also given in the form of bar charts in Figs. 11 to 15 for better understanding. The results show the superiority of the proposed FBHFM algorithm over the existing fusion techniques.


Fig. 11 compares different algorithms on standard deviation over our datasets. All comparison methods perform well and achieve good results, but the proposed method achieves the maximum standard deviation, exceeding 54, which is the highest among all existing methods. Since a higher standard deviation indicates better contrast, the proposed method yields better contrast than the other existing methods.

Fig. 12 compares different algorithms on the entropy measure over our datasets. All comparison methods achieve good values, but the proposed method achieves the maximum entropy, exceeding 12, which is the highest among all existing methods. A higher entropy indicates a higher rate of change in gray levels within the image.


Figure 11: Comparison of standard deviation on datasets


Figure 12: Comparison of entropy on datasets

Fig. 13 compares different algorithms on spatial frequency over our datasets. All comparison methods perform well, but the proposed method achieves the maximum spatial frequency, exceeding 35, which is the highest among all existing methods.


Figure 13: Comparison of spatial frequency on datasets

Fig. 14 compares different algorithms on the similarity index over our datasets. All comparison methods achieve different values, but the proposed method achieves the maximum similarity index among all existing methods. We use the similarity index between the reference image and the fused image, so a higher value indicates a more optimal result.


Figure 14: Comparison of similarity index on datasets

Fig. 15 compares different algorithms on mutual information over our datasets. All comparison methods achieve reasonable values, but the proposed method achieves the maximum mutual information, exceeding 20, which is the highest among all existing methods. Mutual information is typically used in multi-focus image fusion; it measures the amount of information transferred from the partially focused input images into the fused image and is compared with the reference image.


Figure 15: Comparison of mutual information on dataset

5  Future Work

We proposed a hybrid focus value estimation algorithm based on fuzzy logic for multi-focus images. The focus value of every pixel within a block, whose optimal size is obtained using PSO, is estimated based on contrast visibility, sum modified Laplacian and gray level variance. The fuzzy rules are constructed based on spatial frequency. Approximating the focus value using fuzzy logic based on hybrid features not only decreases the complexity but also improves the reliability of the fusion results. A comparison with previous image fusion techniques was also carried out, and the results show the best performance on different sets of images.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflict of interest to report regarding the present study.

References

  1. A. B. Siddiqui, M. A. Jaffar, A. Hussain and A. M. Mirza, “Block-based pixel level multi-focus image fusion using particle swarm optimization,” International Journal of Innovative Computing, Information and Control, vol. 7, no. 7, pp. 3583–3596, 201
  2. M. Gouiffès, B. Planes and C. Jacquemin, “HTRI: High time range imaging,” Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp. 361–372, 2013.
  3. G. Cui, H. Feng, Z. Xu, Q. Li and Y. Chen, “Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition,” Optics Communication, vol. 341, no. 1, pp. 199–209, 2015.
  4. V. P. S. Naidu and J. R. Raol, “Pixel-level image fusion using wavelets and principal component analysis,” Defence Science Journal, vol. 58, no. 3, pp. 338–352, 2008.
  5. H. Li, B. S. Manjunath and S. K. Mitra, “Multisensor image fusion using the wavelet transform,” Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 199
  6. Y. Zheng, E. A. Essock and B. C. Hansen, “An advanced image fusion algorithm based on wavelet transform: incorporation with PCA and morphological processing,” in Image Processing: Algorithms and Systems III. Vol. 5298, pp. 177–187, 2004.
  7. H. J. A. M. Heijmans and J. Goutsias, “Nonlinear multiresolution signal decomposition schemes. II. Morphological wavelets,” IEEE Transactions on Image Processing, vol. 9, no. 11, pp. 1897–1913, 2000.
  8. X. Zhang, J. Han and P. Liu, “Restoration and fusion optimization scheme of multifocus image using genetic search strategies,” Optica Applicata, vol. 35, no. 4, pp. 927–942, 2005.
  9. S. Li, X. Kang and J. Hu, “Image fusion with guided filtering,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2864–2875, 2013.
  10. B. K. S. Kumar, “Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform,” Signal, Image and Video Processing, vol. 7, no. 6, pp. 1125–1143, 2013.
  11. M. B. A. Haghighat, A. Aghagolzadeh and H. Seyedarabi, “Multi-focus image fusion for visual sensor networks in DCT domain,” Computers and Electrical Engineering, vol. 37, no. 5, pp. 789–797, 20
  12. Y. Liu and Z. Wang, “Multi-focus image fusion based on wavelet transform and adaptive block,” Journal of Image and Graphics, vol. 18, no. 11, pp. 1435–1444, 2013.
  13. M. B. A. Haghighat, A. Aghagolzadeh and H. Seyedarabi, “Real-time fusion of multi-focus images for visual sensor networks,” in 6th Iranian Conf. on Machine Vision and Image Processing, Isfahan, Iran, pp. 1–6, 2011.
  14. G. Pajares and J. M. De La Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.
  15. J. Tian, L. Chen, L. Ma and W. Yu, “Multi-focus image fusion using a bilateral gradient-based sharpness criterion,” Optics Communications, vol. 284, no. 1, pp. 80–87, 2011.
  16. I. De and B. Chanda, “A simple and efficient algorithm for multifocus image fusion using morphological wavelets,” Signal Processing, vol. 86, no. 5, pp. 924–936, 2006.
  17. R. Achanta, S. Hemami, F. Estrada and S. Susstrunk, “Frequency-tuned salient region detection,” in 2009 IEEE Conf. on Computer Vision and Pattern Recognition, Miami, FL, USA, pp. 1597–1604, 2009.
  18. D. P. Bavirisetti and R. Dhuli, “Multi-focus image fusion using multi-scale image decomposition and saliency detection,” Ain Shams Engineering Journal, vol. 9, no. 4, pp. 1103–1117, 20
  19. B. K. S. Kumar, “Image fusion based on pixel significance using cross bilateral filter,” Signal Image Video Process, vol. 9, no. 5, pp. 1193–1204, 2015.
  20. A. Toet, “Image fusion by a ratio of low-pass pyramid,” Pattern Recognition Letters, vol. 9, no. 4, pp. 245–253, 1989.
  21. J. Kennedy and R. Eberhart, “Particle swarm optimization,” Proc. of ICNN’95-Int. Conf. on Neural Networks, vol. 4, pp. 1942–1948, 1995.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.