Open Access

ARTICLE


Image Fusion Based on NSCT and Sparse Representation for Remote Sensing Data

N. A. Lawrance*, T. S. Shiny Angel

Department of Computational Intelligence, School of Computing, College of Engineering and Technology, SRM Institute of Science and Technology, Chengalpattu, 603203, India

* Corresponding Author: N. A. Lawrance.

Computer Systems Science and Engineering 2023, 46(3), 3439-3455. https://doi.org/10.32604/csse.2023.030311

Abstract

The practice of integrating images of the same area or object collected from two or more sensors is known as image fusion. The goal is to extract more spatial and spectral information from the resulting fused image than from the component images. Panchromatic and multispectral images must be fused to improve their spatial and spectral quality. This study presents a novel image fusion technique that employs an L0 smoothing filter, the Non-Subsampled Contourlet Transform (NSCT), and Sparse Representation (SR), followed by the Max-Absolute Rule (MAR). The fusion approach is as follows: first, the multispectral and panchromatic images are divided into lower and higher frequency components using the L0 smoothing filter. The low frequency components are then fused using an approach that combines NSCT and SR, while the high frequency components are merged using the max-absolute fusion rule. Finally, the fused image is obtained by recombining the fused low and high frequency data. In terms of correlation coefficient, entropy, spatial frequency, and fusion mutual information, our method outperforms other methods in both image quality enhancement and visual evaluation.

Keywords


1  Introduction

Distinct cameras are used to acquire different views of the same region from a satellite. The most frequent forms of images captured are panchromatic and multispectral. A panchromatic image records the total intensity falling on each pixel across all the narrow bands of the visible spectrum, including Red Green Blue (RGB) and infrared. The electromagnetic bandwidth collected by a sensor is referred to as its spectral resolution; sensors on satellites may detect wavelengths from three or more bands [1]. Spatial resolution refers to the smallest object that can be resolved on the ground, i.e., the size described by a single pixel [2].

Panchromatic and multispectral images give high spatial and high spectral information, respectively. Multispectral images have limited spatial resolution even though they carry precise color data, while panchromatic images are grey-scale images with higher spatial resolution. Both spectral and spatial information are necessary to gain more information: the fused image should provide more information about the scene than either the panchromatic or the multispectral image alone. Image fusion can be classified into four categories: pixel level, object level, feature level, and decision level [3].

The level of fusion is determined by the intended application of the fused image [4]. Many approaches have been introduced for combining Multispectral (MS) and Panchromatic (PAN) images. Popular fusion techniques include high-pass filtering, Principal Component Analysis (PCA), and Intensity-Hue-Saturation (IHS).

This research develops fusion algorithms for IKONOS and Landsat-7 ETM+ satellite data. The PAN and MS images are fused using a multi-scale edge-preserving decomposition based on L0 smoothing, followed by NSCT and SR models. The suggested method is inspired by determining the error distribution in multispectral images [5]. The obtained experimental results are effective and efficient in resolving existing image-processing challenges. Experiments on diverse datasets and sensors such as IKONOS and Landsat-7 ETM+ compare our methodology with other current techniques. Furthermore, a few arguments on parameter selection are discussed in the results section [6]. Finally, the imaging and quantitative findings demonstrate the superiority of our approach. Table 1 lists the abbreviations used in this work.

[Table 1: Abbreviations used in this work (not reproduced here)]

2  Proposed Models

2.1 Smoothing Filter

Sharpening filters are used to emphasize intensity transitions and are applied in a variety of areas, from electronic printing and medical imaging to industrial inspection. Examples of sharpening filters include the first-order-derivative gradient operator, the second-order-derivative Laplacian operator, unsharp masking, and highboost filtering.

An L0 gradient minimization formulation is given here for a general input signal. Let Sig be the filter's input signal and F the filter's output; the gradient of the output is denoted by \nabla F. The L0 gradient minimization can then be written as

L_0 = \min_F \|F - Sig\|_2^2 + \lambda \|\nabla F\|_0 \qquad (1)

where the L2 norm is denoted by \|\cdot\|_2 and the L0 norm by \|\cdot\|_0, and the level of smoothness of the final signal is controlled by the parameter \lambda: the larger \lambda, the coarser the result and the fewer the gradients. The discrete form of Eq. (1) is

L_0 = \min_F \sum_{k=1}^{M} \Big[ \|F_k - Sig_k\|_2^2 + \frac{\lambda}{2} \sum_{j \in N_k} \|F_k - F_j\|_0 \Big] \qquad (2)

where M denotes the length of the signal and N_k the neighbor set of the kth sample. Each neighboring relationship between F_k and F_j is counted twice, so \lambda is divided by 2. The neighboring set N_k is defined for the 1-D, 2-D, and 3-D cases as [7]

N_k = \begin{cases} \{k-1,\ k+1\} & \text{1-D} \\ \text{the four connected pixels} & \text{2-D} \\ \text{all neighbor faces of the } k\text{th face} & \text{3-D} \end{cases} \qquad (3)
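To make the minimization concrete, the following is a minimal 1-D sketch of the alternating scheme commonly used to solve Eq. (1), in which an auxiliary gradient variable h alternates with an exact Fourier-domain quadratic solve (as in Xu et al.'s L0 smoothing). The function name, default parameters, and circular boundary handling are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def l0_smooth_1d(sig, lam=2e-2, kappa=2.0, beta_max=1e5):
    # Alternating scheme for Eq. (1), 1-D case. Circular boundaries are
    # assumed so the quadratic subproblem has a closed form via the FFT.
    F = np.asarray(sig, dtype=np.float64).copy()
    n = F.size
    g = np.zeros(n)
    g[0], g[-1] = -1.0, 1.0            # circular forward-difference kernel
    G = np.fft.fft(g)
    S_hat = np.fft.fft(F)              # spectrum of the input signal Sig
    beta = 2.0 * lam
    while beta < beta_max:
        # h-subproblem: keep a gradient only where it "pays" the L0 penalty
        d = np.roll(F, -1) - F
        h = np.where(d * d >= lam / beta, d, 0.0)
        # F-subproblem: quadratic, solved exactly in the Fourier domain
        F = np.real(np.fft.ifft(
            (S_hat + beta * np.conj(G) * np.fft.fft(h)) /
            (1.0 + beta * np.abs(G) ** 2)))
        beta *= kappa
    return F
```

For example, `smoothed = l0_smooth_1d(noisy, lam=2e-2)` returns a piecewise-flat version of `noisy`; raising `lam` removes more gradients, consistent with the role of \lambda above.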

2.2 Non-Subsampled Contourlet Transform (NSCT)

The spatial structures of images are captured along smooth contours, and any number of directions is allowed at each scale, making NSCT flexible and efficient for representing 2-D objects. The Nonsubsampled Pyramid Structure (NSP) and the Nonsubsampled Directional Filter Bank (NSDFB) in conjunction create the NSCT, as illustrated in Figs. 1a and 1b. When applying directional filters to the coarser scales of the pyramid, extra care must be taken in constructing the NSCT. Because the NSDFB has a tree-like structure, aliasing of upper and lower frequencies can be a problem in the upper stages of the pyramid [8]. The passband region of a directional filter is graded as "good" or "bad", and the aliasing problem is depicted in Figs. 2a and 2b. When no upsampling is used, the coarser scale's high-pass channel is filtered with the "bad" portion of the directional filter, resulting in severe aliasing and a substantial loss of directional resolution. When upsampling is applied, filtering is confined to the "good" region of the directional filter. Figs. 1 and 2 describe how the transform is implemented [9]; a simplified sketch of the nonsubsampled pyramid follows Fig. 2.


Figure 1: Non-subsampled contourlet transform. (a) NSCT implementation based on the NSFB structure. (b) Flawless frequency partitioning obtained with the proposed structure


Figure 2: Necessity of upsampling in the NSCT. (a) Without upsampling, filtering is done in the "bad" portion of the directional filter. (b) With upsampling, filtering is done in the "good" region
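Full NSCT implementations couple the NSP with the NSDFB and are fairly involved. As a simplified illustration of the shift-invariant multiscale half only, the sketch below builds an à trous (nonsubsampled) pyramid; the B3-spline kernel, wrap boundary mode, and function name are assumptions for illustration, not the filters used in the paper, and the directional filter bank is omitted.

```python
import numpy as np
from scipy.ndimage import convolve1d

def nonsubsampled_pyramid(img, levels=3):
    # Shift-invariant multiscale decomposition: no downsampling anywhere,
    # so every band has the same size as the input (as in the NSP).
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # B3-spline lowpass
    low = np.asarray(img, dtype=np.float64)
    details = []
    for j in range(levels):
        # dilate the kernel ("a trous") instead of subsampling the signal
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = h
        smooth = convolve1d(low, k, axis=0, mode='wrap')
        smooth = convolve1d(smooth, k, axis=1, mode='wrap')
        details.append(low - smooth)   # bandpass (high frequency) layer
        low = smooth
    # perfect reconstruction: img == low + sum(details)
    return details, low
```

In the real NSCT each `details[j]` band would then be split into directional sub-bands by the NSDFB.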

2.3 New Adaptive Structured Dictionary Learning

Dictionary learning plays a very important role in sparse representation. In recent years, K-Singular Value Decomposition (K-SVD) has been widely used for adaptive dictionary learning (DL): the structures of various images can be represented better by a dictionary learned with K-SVD than by traditional techniques [10]. Patches {patch_i}_{i=1}^{TN} are randomly sampled from a collection of good-quality images with a fixed window size N×N, where TN denotes the total number of sampled patches. Each patch is rearranged into a column vector and its own mean is subtracted from every value, so the patch intensity is centred around zero. A threshold on the intensity variance is set so that patches with high edge information are kept and smooth patches are removed [11]. With the training patches denoted by {PT_i}_{i=1}^{MN}, the dictionary is learned as follows.

\min_{Dic,\ \tau_i} \sum_{i=1}^{MN} \|\tau_i\|_0 \quad \text{s.t.} \quad \|PT_i - Dic\,\tau_i\|_2 < \epsilon,\ \ i \in \{1, \ldots, MN\} \qquad (4)

where \epsilon > 0 denotes the tolerance factor, {\tau_i}_{i=1}^{MN} the sparse coefficients, and Dic the dictionary to be learned.
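As an illustration of this training-set construction, the sketch below samples random patches, removes their means, and keeps only high-variance (edge-rich) patches; `n_patches`, `size`, and `var_thresh` are hypothetical parameters, not values from the paper.

```python
import numpy as np

def sample_training_patches(images, n_patches=10000, size=8, var_thresh=4.0):
    # Build the training matrix described above: random N x N patches,
    # vectorized, mean-removed, with smooth patches discarded.
    rng = np.random.default_rng(0)
    patches = []
    while len(patches) < n_patches:
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - size)
        c = rng.integers(img.shape[1] - size)
        p = img[r:r + size, c:c + size].astype(np.float64).ravel()
        p -= p.mean()                    # centre patch intensity around zero
        if p.var() > var_thresh:         # variance threshold keeps edges
            patches.append(p)
    return np.column_stack(patches)      # one training patch per column
```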

2.4 K-Singular Value Decomposition

K-SVD accepts an initial over-complete dictionary Dic_0, a total number of iterations n, and the training samples arranged as the columns of a matrix Z [12]. The aim of the algorithm is to iteratively improve the dictionary so as to obtain a sparse representation of the samples in Z, by solving the optimization problem

\min_{Dic,\ \tau} \|Z - Dic\,\tau\|_F^2 \quad \text{s.t.} \quad \forall l:\ \|\tau_l\|_0 \le N \qquad (5)

The K-SVD algorithm consists of two steps that together make up one iteration:

     i)  Based on the current dictionary estimate, the samples in Z are sparse-coded, generating the matrix of sparse representations \tau.

    ii)  The dictionary atoms are updated according to the current sparse representations.

Sparse coding is implemented using the Orthogonal Matching Pursuit (OMP) algorithm; the dictionary update is done one atom at a time, optimizing the target function for each atom independently while leaving the rest unchanged. Both steps are shown in Algorithm 1, where line 5 is the sparse coding and lines 6–13 are the dictionary update.

[Algorithm 1: the K-SVD algorithm (pseudocode not reproduced here)]

The key advancement in the K-SVD algorithm is the atom update step, which is accomplished while preserving the constraint in Eq. (5). To do this, the update step only uses the samples in Z whose sparse representations use the current atom. Letting P denote the indices of the signals in Z that use the mth atom, the update is achieved by optimizing the target function

\|Z_P - Dic\,\tau_P\|_F^2 \qquad (6)

over both the atom and the related coefficient row in \tau_P. The result is a simple rank-1 approximation task, given by

\{\bar{D}, \bar{g}\} := \mathop{Argmin}_{\bar{D},\ \bar{g}} \|E_m - \bar{D}\,\bar{g}^T\|_F^2 \quad \text{s.t.} \quad \|\bar{D}\|_2 = 1 \qquad (7)

where E_m = Z_P - \sum_{l \ne m} D_l \tau_{l,P} denotes the error matrix computed without the mth atom, \bar{D} denotes the updated atom, and \bar{g} denotes the new coefficient row in \tau_P. The optimization problem can be solved directly by SVD decomposition or by a more efficient numeric power method [13].
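A minimal NumPy sketch of this atom update step (Eqs. (6) and (7)) follows; the matrix layout (samples as columns of Z, codes as rows of tau) and the function name are our assumptions.

```python
import numpy as np

def ksvd_atom_update(Z, Dic, tau, m):
    # Update the m-th atom of Dic and its coefficient row in tau.
    # Z: samples (n x T), Dic: dictionary (n x M), tau: codes (M x T).
    P = np.flatnonzero(tau[m, :])        # indices of samples using atom m
    if P.size == 0:
        return Dic, tau                  # atom unused: nothing to update
    tau[m, P] = 0.0                      # remove atom m's contribution
    E = Z[:, P] - Dic @ tau[:, P]        # error matrix E_m without atom m
    # best rank-1 approximation of E via the leading singular pair
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    Dic[:, m] = U[:, 0]                  # updated atom, unit l2 norm
    tau[m, P] = s[0] * Vt[0, :]          # updated coefficient row
    return Dic, tau
```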

2.5 Orthogonal Matching Pursuit Algorithm

The aim of the OMP algorithm is to provide approximate solutions to the following two problems. The first is the sparsity-constrained sparse coding problem, expressed by

\hat{\vartheta} = \mathop{Argmin}_{\vartheta} \|z - Dic\,\vartheta\|_2^2 \quad \text{s.t.} \quad \|\vartheta\|_0 \le N \qquad (8)

The second is the error-constrained sparse coding problem, expressed by

\hat{\vartheta} = \mathop{Argmin}_{\vartheta} \|\vartheta\|_0 \quad \text{s.t.} \quad \|z - Dic\,\vartheta\|_2^2 \le \epsilon \qquad (9)

For simplicity, we presume that the columns of Dic are normalized to unit \ell_2 length (although this constraint can easily be eliminated).

The greedy OMP algorithm chooses the atom with the highest correlation to the current residual at each stage. Once the atom has been picked, the signal is projected orthogonally onto the span of the chosen atoms, the residual is recomputed, and the cycle continues [14]. This is shown in Algorithm 2, where line 5 is the greedy selection step and line 7 is the orthogonalization step.

[Algorithm 2: Orthogonal Matching Pursuit (pseudocode not reproduced here)]

OMP is a greedy algorithm that attempts to locate a sparse representation of a sample over a given dictionary. It iteratively locates the strongest basis vectors (atoms), so that the representation error decreases in each iteration [15]. This is done by choosing the atom from the dictionary that has the largest absolute projection onto the error vector; in other words, we pick the atom that contributes the most information and hence reduces the reconstruction error as much as possible. The code vector w is derived from the sample vector z and the dictionary Dic [16]. It is formulated in three steps as follows (a minimal sketch is given after the list):

i)   Choose the atom with the largest projection onto the residual

ii)   Update w_n = \mathop{Argmin}_{w_n} \|z - Dic\,w_n\|_2

iii)   Update the residual r_n = z - \hat{z}_n, where \hat{z}_n = Dic\,w_n
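The sketch below implements the three steps (greedy selection, orthogonal projection, residual update) for the sparsity-constrained problem of Eq. (8); the stopping tolerance and function signature are illustrative assumptions.

```python
import numpy as np

def omp(z, Dic, n_nonzero, tol=1e-6):
    # Orthogonal Matching Pursuit. Assumes the columns of Dic are
    # normalized to unit l2 length, as stated above.
    residual = z.copy()
    support = []
    coef = np.zeros(Dic.shape[1])
    for _ in range(n_nonzero):
        # greedy selection: atom most correlated with the current residual
        m = int(np.argmax(np.abs(Dic.T @ residual)))
        support.append(m)
        # orthogonal projection of z onto the span of the chosen atoms
        w, *_ = np.linalg.lstsq(Dic[:, support], z, rcond=None)
        residual = z - Dic[:, support] @ w
        if np.linalg.norm(residual) < tol:
            break
    coef[support] = w
    return coef
```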

2.6 Sparse Representation

The sparse model relies on the assumption that many signals can be represented as a linear combination of only a few atoms of a redundant dictionary [17]. The atoms form the columns of a matrix with M prototypes and n-dimensional signals, giving a redundant dictionary Dic \in R^{n \times M} (n < M); a signal y \in R^n is expressed as y = Dic\,x or y \approx Dic\,x. The vector x \in R^M holds the coefficients that represent the signal y over the dictionary Dic. Because the dictionary is redundant, x may not be unique; the SR model determines the solution with the fewest non-zero components. Assuming negligible noise this can be achieved exactly, or, allowing for inexact noise, by solving the optimization problem [18].

\min_{x} \|x\|_0 \quad \text{s.t.} \quad y = Dic\,x, \qquad (10)

\min_{x} \|x\|_0 \quad \text{s.t.} \quad \|y - Dic\,x\|_2^2 \le \tau. \qquad (11)

Based on recent developments in SR and compressed sensing, the non-convex \ell_0-minimization problems in Eqs. (10) and (11) are relaxed to the convex \ell_1-minimization problems

\min_{x} \|x\|_1 \quad \text{s.t.} \quad y = Dic\,x, \qquad (12)

\min_{x} \|x\|_1 \quad \text{s.t.} \quad \|y - Dic\,x\|_2^2 \le \tau. \qquad (13)

whose solutions can be obtained with linear programming methods [19].
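As a sketch of how Eq. (12) maps to a linear program: splitting x = u - v with u, v >= 0 turns the \ell_1 objective into a linear one. The use of SciPy's `linprog` with the "highs" solver is an illustrative choice, not the paper's solver.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(y, Dic):
    # Solve min ||x||_1 s.t. Dic x = y (Eq. (12)) as a linear program:
    # with x = u - v and u, v >= 0, ||x||_1 = sum(u) + sum(v).
    n, M = Dic.shape
    c = np.ones(2 * M)                    # objective: sum(u) + sum(v)
    A_eq = np.hstack([Dic, -Dic])         # equality: Dic u - Dic v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:M] - res.x[M:]          # recover x = u - v
```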

3  Proposed Method

The suggested method fuses panchromatic and multispectral images from IKONOS and Landsat-7 ETM+. This fusion method yields improved spectral and spatial information. As previously indicated, the fusion procedure uses the L0 smoothing filter, NSCT, and SR. The final image provides more information about the satellite scene than the multispectral and panchromatic images individually. Fig. 4 depicts the entire framework of the technique.

i)   The source image is decomposed into multiple scales using a multi-scale decomposition model for efficient extraction of the required high-resolution data from the PAN and MS images. This procedure keeps specific information separated and improves the quality of the fused images. The multi-scale decomposition proposed here is based on the L0 smoothing filter.

ii)   For low-frequency data, the NSCT-SR based image fusion technique is used to preserve the structure and detail information of each channel of the MS and PAN images, while for high-frequency data, the max-absolute fusion rule is used to remove redundant information. Each channel of the MS image (R, G, and B) is separated, and then NSCT-SR/MAR is applied to each channel of the MS image and to the PAN image.

iii)   For a natural appearance of the fused image, the low frequency data and high frequency data fused in the previous step are fused further.

iv)   In addition, several existing image fusion techniques, namely guided filtering (GF), IHS fusion, Principal Component Analysis (PCA), the Brovey Transform (BT), and NSCT, have been evaluated for comparison.

v)   The fusion process is statistically evaluated using four commonly used parameters: correlation coefficient (CC), entropy, Spatial Frequency (SF), and Fusion Mutual Information (FMI).


Figure 3: Iterative process of K-SVD algorithm

The proposed system (Fig. 4) presents a new technique for remote sensing image fusion organized into modules: source image decomposition, image data fusion (LF data fusion and HF data fusion), and final fusion. A detailed explanation of each module is given below.


Figure 4: Complete framework of the proposed image fusion method

3.1 Source Image Decomposition

Multiple image features at different scales are obtained by decomposing the source image. In existing decomposition models the fused image often suffers from artifacts, since they mostly adopt linear filters. Here the L0 smoothing filter is adopted for decomposing the source image, which reduces such artifacts; the decomposition is controlled by the smoothing parameters described below. Decomposition of the image yields low frequency data and a series of high frequency data. Fig. 5 represents the decomposition process: from the source image Iorg, a smoothed image is acquired via Eq. (2) and is considered the Low Frequency (LF) data, and subtracting this smoothed image from the original image Iorg gives the High Frequency (HF) data.


Figure 5: Decomposition process

S = L0Smooth(Iorg, \lambda, \kappa) performs L0 gradient smoothing of the input image Iorg, with smoothness weight lambda and rate kappa. The degree of smoothing is controlled by the parameter lambda, which typically lies within [1e-3, 1e-1]. The rate is controlled by kappa: the smaller the kappa, the more iterations are performed and the sharper the edges. Then, given the smoothed image, i.e., the Low Frequency Data (LFD), we can get the high frequency data by

D = I_{org} - I_{low} \qquad (14)

where I_{org} is the input image and I_{low} is the smoothed image (LFD) [19,20].
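A sketch of how this multi-scale decomposition might be organized follows. It assumes a 2-D `smooth_fn(img, lam)` routine standing in for L0Smooth (e.g., a 2-D analogue of the 1-D sketch in Section 2.1); the doubling lambda schedule per level is a hypothetical choice, not specified by the paper.

```python
import numpy as np

def decompose(img, smooth_fn, levels=3, lam=2e-2):
    # Repeated smoothing yields one LF layer and a series of HF layers.
    hf_layers = []
    current = np.asarray(img, dtype=np.float64)
    for j in range(levels):
        low = smooth_fn(current, lam * (2 ** j))   # coarser at each level
        hf_layers.append(current - low)            # Eq. (14): HF = Iorg - Ilow
        current = low
    return current, hf_layers                      # LF data, series of HF data
```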

3.2 Image Data Fusion

The source images (MS and PAN) are decomposed into low frequency data (LFD) and high frequency data (HFD) using the L0 smoothing filter. The LF data of both images and the HF data of both images are then fused together.

3.2.1 Low Frequency Data Fusion

A complete demonstration of LF data fusion is given in Fig. 6. Initially, the LF data is decomposed into Sub-Low frequency Data (SLD) and Sub-High frequency Data (SHD) using NSCT. The SLD regions of both images are represented sparsely over a dictionary trained by dictionary learning. The sparse coefficients of the SLD data are then fused with the max-L1 rule. Similarly, the SHD are combined using the max-absolute rule. The complete steps are as follows:

i)   NSCT decomposition is used to decompose the LF data into SLD {sublowA, sublowB} and SHD {subhighA, subhighB}.

ii)   A sliding window technique is used to split sublowA and sublowB into patches of size N×N. Patches {patchA_i}_{i=1}^{M} and {patchB_i}_{i=1}^{M} are obtained from sublowA and sublowB at the same positions i, where M denotes the total number of patches.

iii)   The patches are rearranged into column vectors for every position i, and the means are removed as follows:

\hat{C}_{A,i} = C_{A,i} - m_{A,i} \qquad (15)

\hat{C}_{B,i} = C_{B,i} - m_{B,i} \qquad (16)

where {m_{img,i}}_{img=A,B} is a column vector, all of whose elements equal the mean value of the corresponding patch.

iv)   The sparse coefficient vectors of the above are calculated by solving Eqs. (10) and (11) with the OMP (Orthogonal Matching Pursuit) algorithm.

v)   The sparse coefficient vectors are fused using the max-L1 rule

\alpha_i = \begin{cases} \alpha_{A,i}, & \text{if } \|\alpha_{A,i}\|_1 > \|\alpha_{B,i}\|_1 \\ \alpha_{B,i}, & \text{if } \|\alpha_{A,i}\|_1 < \|\alpha_{B,i}\|_1 \end{cases} \qquad (17)

vi)   The result of the fusion model is

SubCLow_i = \begin{cases} Dic\,\alpha_{A,i} + m_{A,i}, & \text{if } \alpha_i = \alpha_{A,i} \\ Dic\,\alpha_{B,i} + m_{B,i}, & \text{if } \alpha_i = \alpha_{B,i} \end{cases} \qquad (18)

vii)   Iterating steps (iii) to (vi) over all patches yields the fused LF data patches {CLow_i}_{i=1}^{M}. Each CLow_i is reshaped to N×N as PLow_i, and each PLow_i is placed at its original position in PLow, the LF data result.

viii)   The SHD are fused using the max-absolute rule with a 3×3 window, subject to a consistency verification method.

ix)   Finally, the inverse NSCT is applied to obtain the fused LF data output. Here, LF data: low frequency data; A: MS image; B: PAN image; SLD: sub-low frequency data; SHD: sub-high frequency data. A sketch of the per-patch fusion step is given after Fig. 6.


Figure 6: Low frequency data framework
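As a sketch of the per-patch fusion in steps (v) and (vi): Eq. (17) picks the sparse code with the larger \ell_1 norm and Eq. (18) reconstructs the fused patch by adding back the stored patch mean. The argument layout and the tie-break toward image B are illustrative assumptions.

```python
import numpy as np

def fuse_lf_patch(alpha_A, alpha_B, m_A, m_B, Dic):
    # Max-L1 rule (Eq. (17)): keep the sparse code with larger l1 norm,
    # then reconstruct the fused patch vector (Eq. (18)).
    if np.linalg.norm(alpha_A, 1) > np.linalg.norm(alpha_B, 1):
        return Dic @ alpha_A + m_A
    return Dic @ alpha_B + m_B
```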

3.2.2 High Frequency Data Fusion

Fig. 7 describes HF data fusion using the max-absolute rule. The high frequency data obtained from the panchromatic and multispectral images are fused using the maximum-absolute rule: at each position, whichever coefficient of the two images has the larger magnitude is selected for the final image.


Figure 7: HF data fusion using max-absolute rule

HF data coefficients are fused using the max-absolute rule, a pixel-by-pixel maximum-selection fusion rule (see the sketch below). The high frequency data extracted from the two source images by the L0 smoothing filter are denoted High Frequency Data Multispectral (HFDM) and High Frequency Data Panchromatic (HFDP).
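The rule reduces to a one-line pixel-wise selection; a minimal sketch follows (the tie-break toward the MS coefficients is our assumption).

```python
import numpy as np

def max_absolute_fuse(hf_ms, hf_pan):
    # Pixel-by-pixel max-absolute rule: keep whichever HF coefficient
    # (HFDM or HFDP) has the larger magnitude at each position.
    return np.where(np.abs(hf_ms) >= np.abs(hf_pan), hf_ms, hf_pan)
```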

3.2.3 Final Fusion

In the final step of fusion, the high frequency data fused by the max-absolute rule and the low frequency data fused by the combined NSCT and SR scheme are added together, which gives the final fused image with richer spectral and spatial information [21]. Fig. 8 explains how the final fusion takes place.

Fin = LF + \sum_{i=1}^{k} HF_i \qquad (19)


Figure 8: Final fusion of image
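Eq. (19) amounts to a single summation over the fused layers; a minimal sketch:

```python
import numpy as np

def final_fusion(lf_fused, hf_fused_layers):
    # Eq. (19): fused image = fused LF data plus every fused HF layer.
    return lf_fused + np.sum(hf_fused_layers, axis=0)
```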

4  Results and Discussion

4.1 Datasets

The two datasets used for the above process are Landsat-7 ETM+ and IKONOS, captured by different satellites; one sample image from each is used. Table 2 lists the parameters set on the datasets for the experiments.

[Table 2: Parameter settings for the experiments (not reproduced here)]

4.1.1 Landsat-7 ETM + Dataset

The Landsat-7 ETM+ satellite dataset covers the area of Girona, Spain, and was acquired in 2008. The Landsat-7 PAN image has a spatial resolution of 15 m and the MS image a spatial resolution of 30 m. Landsat-7 PAN images are acquired in the 0.52–0.90 μm spectral range. Landsat-7 MS images are acquired in six spectral ranges: 0.63–0.69 μm (Red), 0.52–0.60 μm (Green), 0.45–0.52 μm (Blue), 0.76–0.90 μm (Near-IR), 1.55–1.75 μm (Mid-IR), and 2.08–2.35 μm (Shortwave-IR). Sub-scenes of the raw MS and PAN images are used in our experiment. Fig. 9 gives the fusion results on the Landsat-7 ETM+ data.


Figure 9: Fusion results of Landsat-7 ETM + data. (a) PAN data. (b) MS data. (c) Proposed. (d) Brovey. (e) IHS. (f) Guided (GF). (g) NSCT. (h) PCA

4.1.2 IKONOS Dataset

The IKONOS PAN image has a spatial resolution of 1 m, acquired in the 0.45–0.90 μm spectral range, while the MS image has a spatial resolution of 4 m, obtained in four spectral ranges: 0.63–0.69 μm (Red), 0.52–0.60 μm (Green), 0.45–0.52 μm (Blue), and 0.76–0.90 μm (NIR). Our case study comprises subset images of the City of Fredericton, New Brunswick, Canada obtained by IKONOS in October 2001. The data collection consists of images of an urban area with different features such as lanes, houses, parking lots, fruit trees, and grass. Fig. 10 gives the fusion results on the IKONOS data.


Figure 10: Fusion results of IKONOS data. (a) PAN data. (b) MS data. (c) Proposed. (d) Brovey. (e) IHS. (f) GF. (g) NSCT. (h) PCA

4.2 Quality Evaluation of Fused Image

The fused image quality was evaluated using statistical parameters: correlation coefficient (CC), Standard Deviation (SD), entropy, spatial frequency (SF), and fusion mutual information (FMI).

i. Correlation Coefficient: The correlation coefficient measures the similarity between two images. The similarity between the fused image FI and the reference image RI is given by

CC = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} [RI(i,j) - \overline{RI}]\,[FI(i,j) - \overline{FI}]}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} [RI(i,j) - \overline{RI}]^2 \cdot \sum_{i=1}^{m}\sum_{j=1}^{n} [FI(i,j) - \overline{FI}]^2}} \qquad (20)

ii. Entropy: The entropy measures the image information content; a larger value means more information in the fused image.

E = -\sum_{i=0}^{255} F(i) \log_2 F(i) \qquad (21)

iii. Spatial Frequency: The row and column frequencies of the fused image define the spatial frequency, which reflects the quality of the fused image.

SF = \sqrt{RF^2 + CF^2} \qquad (22)

iv. Fusion Mutual Information: The degree of dependence between the source and fused images is computed using the FMI parameter. The higher the FMI value, the better the quality of the fused image.

FMI = MI_{I_p;I_f} + MI_{I_m;I_f} \qquad (23)
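Sketches of three of these metrics in NumPy follow (CC, entropy, and SF); FMI additionally requires a mutual-information estimate between image histograms and is omitted here for brevity. The bin count and grey-level range are assumed, not the paper's settings.

```python
import numpy as np

def correlation_coefficient(ref, fused):
    # Eq. (20): similarity between the reference and fused images.
    r = ref - ref.mean()
    f = fused - fused.mean()
    return np.sum(r * f) / np.sqrt(np.sum(r ** 2) * np.sum(f ** 2))

def entropy(img, bins=256):
    # Eq. (21): Shannon entropy of the grey-level histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                        # skip empty bins (0 log 0 := 0)
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    # Eq. (22): RF and CF are RMS first differences along rows and columns.
    img = img.astype(np.float64)
    rf2 = np.mean(np.diff(img, axis=1) ** 2)
    cf2 = np.mean(np.diff(img, axis=0) ** 2)
    return np.sqrt(rf2 + cf2)
```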

The full-scale parameters (entropy, spatial frequency) are obtained from the resultant image alone, while the degraded-scale parameters (CC, FMI) also require a reference image alongside the resultant image. Tables 3 and 4 give the quantitative assessment of the full-scale and degraded-scale experimental results on the Landsat-7 ETM+ and IKONOS data.

[Tables 3 and 4: Quantitative assessment of full-scale and degraded-scale results (not reproduced here)]

5  Conclusion

In this research, we proposed a novel method for satellite image fusion that fuses multispectral and panchromatic images to obtain rich spectral and spatial details of the satellite scene. The method includes an L0 smoothing filter, which gives a better decomposition into low and high frequency data; the low frequency components obtained from the L0 filter are fused by a combination of NSCT and SR, while the max-absolute rule is applied to the high frequency components, together giving an optimal fusion of the images. The results have been compared with the Brovey, IHS, Guided, NSCT, and PCA methods by means of CC, entropy, SF, and FMI; as Fig. 11 shows, our proposed method gives better performance in satellite image fusion.


Figure 11: Quality evaluation of different satellite data fusion results with the proposed model. (a) Landsat-7 ETM+ data. (b) IKONOS data

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. S. Di Zenzo, “A note on the gradient of a multi-image,” Computer Vision, Graphics, and Image Processing, vol. 33, pp. 116–125, 1986.
  2. A. Cumani, “Edge detection in multispectral images,” CVGIP: Graphical Models and Image Processing, vol. 53, pp. 40–51, 1991.
  3. G. Sapiro and L. Ringach, “Anisotropic diffusion of multivalued images with applications to color filtering,” IEEE Transactions on Image Processing, vol. 5, no. 11, pp. 1582–1586, 1996.
  4. S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 7, pp. 710–732, 1992.
  5. C. Pohl and J. Van Genderen, “Multisensor image fusion in remote sensing: Concepts, methods and applications,” International Journal of Remote Sensing, vol. 19, pp. 823–854, 1998.
  6. H. Manjunath and S. Mitra, “Multisensor image fusion using wavelet transform,” Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.
  7. T. Wilson, S. Rogers and L. Meyers, “Perceptual-based hyperspectral image fusion using multiresolution analysis,” Optical Engineering, vol. 34, no. 11, pp. 3154–3164, 1995.
  8. T. Wilson, S. Rogers and L. Meyers, “Perceptual-based image fusion for hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 35, no. 4, pp. 1007–1017, 1997.
  9. T. Pu and G. Ni, “Contrast-based image fusion using the discrete wavelet transform,” Optical Engineering, vol. 39, no. 8, pp. 2075–2082, 2000.
  10. D. Yocky, “Image merging and data fusion by means of the discrete two dimensional wavelet transform,” Journal of the Optical Society of America A, vol. 12, no. 9, pp. 1834–1841, 1995.
  11. B. Garguet, “The use of multiresolution analysis and wavelets transform for merging spot panchromatic and multispectral image data,” Photogrammetric Engineering & Remote Sensing, vol. 62, no. 9, pp. 1057–1066, 1996.
  12. B. Garguet, “Wavemerg: A multiresolution software for merging spot panchromatic and spot multispectral data,” Environmental Modelling and Software, vol. 12, no. 1, pp. 85–92, 1997.
  13. D. Yocky, “Multiresolution wavelet decomposition image merger of Landsat Thematic Mapper and SPOT panchromatic data,” Photogrammetric Engineering & Remote Sensing, vol. 62, pp. 1067–1074, 1996.
  14. T. Ranchin and L. Wald, “Fusion of high spatial and spectral resolution images: The arsis concept and its implementation,” Photogrammetric Engineering & Remote Sensing, vol. 66, no. 1, pp. 49–61, 2000.
  15. J. Marcello, A. Medina and F. Eugenio, “Evaluation of spatial and spectral effectiveness of pixel-level fusion techniques,” IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 3, pp. 432–436, 2013.
  16. D. Fasbender, J. Radoux and P. Bogaert, “Bayesian data fusion for adaptable image pan sharpening,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 6, pp. 1847–1857, 2008.
  17. X. Zhu and R. Bamler, “A sparse image fusion algorithm with application to pan-sharpening,” IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 5, pp. 2827–2836, 2013.
  18. L. Alparone and B. Aiazzi, “Multispectral and panchromatic data fusion assessment without reference,” Photogrammetric Engineering & Remote Sensing, vol. 74, no. 2, pp. 193–200, 2008.
  19. A. Siva Kumar, S. Godfrey and R. Ramesh, “Efficient sensitivity orient block chain encryption for improved data security in cloud,” Concurrent Engineering, vol. 29, no. 3, pp. 249–257, 2021.
  20. D. Kaur and Y. Kaur, “Various image segmentation techniques: A review,” International Journal of Computer Science and Mobile Computing, vol. 3, no. 5, pp. 809–814, 2014.
  21. T. Li, H. Li, S. Zhong, Y. Kang, Y. Zhang et al., “Knowledge graph representation reasoning for recommendation system,” Journal of New Media, vol. 2, no. 1, pp. 21–30, 2020.

Cite This Article

N. A. Lawrance and T. S. Shiny Angel, "Image fusion based on NSCT and sparse representation for remote sensing data," Computer Systems Science and Engineering, vol. 46, no. 3, pp. 3439–3455, 2023. https://doi.org/10.32604/csse.2023.030311


This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.