Computer Systems Science & Engineering
DOI:10.32604/csse.2023.027187
Article

Visual Enhancement of Underwater Images Using Transmission Estimation and Multi-Scale Fusion

R. Vijay Anandh1,* and S. Rukmani Devi2

1Department of Electronics and Communication Engineering, RMK College of Engineering and Technology, Tiruvallur, Tamilnadu, 601206, India
2Department of Electronics and Communication Engineering, RMD Engineering College, Gummidipundi, Tamilnadu, 601206, India
*Corresponding Author: R. Vijay Anandh. Email: vijayanandhphd@gmail.com
Received: 12 January 2022; Accepted: 10 March 2022

Abstract: The demand for the exploration of ocean resources is increasing exponentially, and underwater image data play a significant role in many research areas. However, the visual quality of underwater images is degraded by two main factors, namely backscattering and attenuation. Visual enhancement has therefore become an essential process for recovering the required information from these images. Many algorithms have been proposed over the past decade for improving image quality. This paper proposes a single-image enhancement technique that does not require any external datasets. The degraded images are subjected to two main processes, namely color correction and image fusion. Initially, the veiling light and the transmission are estimated to determine the correction required: veiling light refers to unwanted scattered light, whereas transmission refers to the light needed for color correction. These estimates are applied in the scene recovery equation. The color-corrected image then enters a fusion process in which two versions of the image are produced by white balance and contrast enhancement techniques. From these, three weight maps, namely luminance, saliency, and chromaticity, are derived and fused using a Laplacian pyramid. The results are compared graphically with the input data using RGB histogram plots. Finally, image quality is measured and tabulated using underwater image quality measures.

Keywords: Underwater image; backscattering; attenuation; image fusion; veiling light; white balance; Laplacian pyramid

1  Introduction

Underwater exploration has become a demanding field today. Research is being undertaken in resource exploration and extraction, as the ocean contains an abundance of resources that are essential for mankind. Assessing the state of those resources requires visual data, which therefore plays a predominant role. However, the acquired images suffer from degradation. The two main factors degrading the visual quality of underwater images are backscattering and attenuation.

Backscattering [1] occurs when light is reflected back in the direction it came from by particles suspended between the camera lens and the object. The strength of this effect depends mostly on the turbidity of the water: the greater the turbidity, the stronger the backscattering. Besides suspended particles, excessive sand and plankton content can also cause backscattering. Fig. 1 shows the underwater imaging model, including the path of backscattered light.


Figure 1: Underwater imaging model

Attenuation is also known as extinction. In physics, attenuation is defined as the loss of flux intensity through a medium. The intensity of ambient light decreases with depth because of absorption, which makes the scene look increasingly bluish as the other colors are absorbed. Passive factors that contribute to attenuation include scattering and noise. The color spectrum model of light attenuation is shown in Fig. 2.


Figure 2: Attenuation of light

The paper is organized as follows. Section 2 reviews related work, and Section 3 gives a detailed explanation of the proposed method. Section 4 presents the results, and Section 5 concludes the paper and describes the future scope of the proposed work.

2  Related Works

New visual enhancement techniques are developed continually, each with its own specificity depending on the requirements. This section reviews some important techniques related to the proposed approach.

2.1 Histogram Equalization

Histogram equalization is a method to improve the visual quality of underwater images captured under distorted lighting conditions; its main objective is to enhance the contrast of the degraded image. Gwanggil [2] developed a histogram equalization method for color images. In this approach, an RGB image is taken as input, converted to the HSV (Hue, Saturation, Value) color space, and split into its three channels. Histogram equalization is then applied to the S and V channels. Finally, the channels are merged and converted back to an RGB image, as shown in Fig. 3.


Figure 3: Histogram equalization

This approach is not applicable when a flat histogram is needed. To overcome this, adaptive histogram equalization (AHE) was developed, in which several histograms are computed, one for each part of the image. The main idea of AHE is to transform each pixel with a transformation function derived from its neighboring region. The main problem of AHE is over-amplification of noise. To overcome this, Rajesh Kumar et al. [3] proposed Contrast Limited Adaptive Histogram Equalization (CLAHE).
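For illustration, a minimal Python/OpenCV sketch of both approaches follows; the file names, clip limit, and tile size are assumed values, not taken from [2] or [3]:

```python
import cv2

# Load a degraded underwater image (hypothetical file name).
bgr = cv2.imread("underwater.png")

# Convert to HSV and equalize the S and V channels, as in [2].
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)
he_result = cv2.cvtColor(
    cv2.merge((h, cv2.equalizeHist(s), cv2.equalizeHist(v))),
    cv2.COLOR_HSV2BGR)

# CLAHE variant [3]: clip each local histogram to limit noise amplification.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
clahe_result = cv2.cvtColor(
    cv2.merge((h, s, clahe.apply(v))), cv2.COLOR_HSV2BGR)

cv2.imwrite("he_result.png", he_result)
cv2.imwrite("clahe_result.png", clahe_result)
```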

2.2 Dark Channel Prior

This approach was originally proposed for hazy atmospheric images. Its main objective is to estimate the transmission of the input image, which is then used to remove the haze. Drews et al. [4] developed the underwater dark channel prior (UDCP) to estimate underwater transmission more reliably than the standard DCP algorithm.

Fig. 4 shows the flow diagram of the dark channel prior analysis. Galdran et al. [5] observed that red channel intensity decreases as the distance from the camera increases, and developed the Red Channel Prior to compensate; this method is applicable to the low-wavelength light that predominates underwater. Simon et al. [6] proposed a hierarchy-based model in which haze-opaque regions are identified, with the estimation of backscatter as its main objective.


Figure 4: Dark channel prior
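A minimal sketch of the generic DCP computation (not the exact UDCP variant of [4]) is given below: the dark channel is the patch-wise minimum over channels and neighborhood, the airlight is taken from the brightest dark-channel pixels, and the transmission follows t(x) = 1 − ω · dark(I/A). The patch size, ω, and the file name are assumptions.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over the color channels, then a minimum
    # filter over a patch x patch neighborhood.
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    # t(x) = 1 - omega * dark_channel(I / A); omega < 1 keeps a
    # trace of haze for a natural look.
    return 1.0 - omega * dark_channel(img / airlight, patch)

img = cv2.imread("hazy.png").astype(np.float64) / 255.0
# Crude airlight estimate: mean color of the brightest dark-channel pixels.
dc = dark_channel(img)
idx = np.unravel_index(np.argsort(dc, axis=None)[-100:], dc.shape)
airlight = img[idx].mean(axis=0)
t = estimate_transmission(img, airlight)
```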

2.3 Hardware

Some methods rely on specialized hardware for the visual enhancement of underwater images. For example, the divergent-beam underwater LIDAR imaging system [7] uses an optical sensing technique to capture images in highly turbid water. However, deploying this system is very expensive and time-consuming, and even after installation the device must be monitored and cleaned, which is impractical. This method is therefore unsuitable for continuous retrieval of underwater image data.

3  Proposed Method

In the proposed method, underwater images are visually enhanced in two important steps: color restoration and fusion.

3.1 Color Restoration

The main objective of the color restoration process is to solve the problem of attenuation in underwater images. As shown in Fig. 5, the color with lower intensity (most often red) is recovered in three main steps: veiling light estimation, transmission estimation, and scene recovery.

1) Veiling Light Estimation: Veiling light, or background light [8], is a key quantity in many dehazing algorithms. It is the ambient light scattered by particles suspended in hazy water into the line of sight of the imaging system; it degrades the image and lowers the visual quality. First, bright regions are found by computing a histogram of the luma channel in the YCbCr color space. Equivalent pixels are identified for the refinement process, and the veiling light is finally estimated by averaging the remaining pixels [9]: V = (x_v, c), where x_v ∈ ℝ² is the location of the veiling light and c ∈ ℝ³ is the RGB value at x_v.
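A minimal sketch of this estimate, simplifying the refinement of [9] to averaging the brightest fraction of luma pixels (the percentile and file name are assumptions):

```python
import cv2
import numpy as np

def estimate_veiling_light(bgr, percentile=99.9):
    # Bright regions are located on the luma (Y) channel of YCbCr.
    luma = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    mask = luma >= np.percentile(luma, percentile)
    # The veiling light V is the average color of the bright pixels.
    return bgr[mask].astype(np.float64).mean(axis=0) / 255.0

V = estimate_veiling_light(cv2.imread("underwater.png"))
```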

2) Transmission Estimation: The main objective of transmission estimation is to prevent oversaturation and artefacts in background regions. Oversaturation arises when image values fall outside the range [0, 1]; it is caused by a wrong estimate of the bright regions, which in turn yields incorrect transmission values. Artefacts are features that appear in an image although they are not present in the captured scene; they typically occur in background regions with low transmission values.

t(x) = \begin{cases} t_{LB}(x), & D_M(I(x)) \le \bar{D}_M + \sigma_M \\ t_B(x), & D_M(I(x)) \ge D_M^{max} + \sigma_M \\ \alpha(x)\, t_{LB}(x) + (1 - \alpha(x))\, t_B(x), & \text{otherwise} \end{cases} \quad (1)

where t_{LB}(x) is the lower-bound transmission, t_B(x) is the complement of the bound transmission, \bar{D}_M is the average Mahalanobis distance of the veiling-light pixels, D_M^{max} is the maximum Mahalanobis distance of the veiling-light pixels, and α(x) blends the two bounds.
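Eq. (1) interpolates between the two bounds. The exact form of α(x) is not spelled out here, so the sketch below assumes a linear ramp in the Mahalanobis distance as a placeholder, not the authors' definition:

```python
import numpy as np

def blend_transmission(d_m, t_lb, t_b, d_mean, d_max, sigma):
    # Assumed linear ramp for alpha(x) between the two thresholds
    # of Eq. (1).
    alpha = np.clip((d_max + sigma - d_m) / (d_max - d_mean + 1e-12),
                    0.0, 1.0)
    t = alpha * t_lb + (1.0 - alpha) * t_b
    low = d_m <= d_mean + sigma    # background-like: lower bound wins
    high = d_m >= d_max + sigma    # foreground-like: complement bound wins
    t[low] = t_lb[low]
    t[high] = t_b[high]
    return t
```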

3) Scene Recovery: The values obtained from veiling light estimation and transmission estimation are applied in the image formation model:

s(x) = \frac{I(x) - V}{t(x)} + V \quad (2)
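A direct sketch of Eq. (2) follows; the lower clamp on t(x) (t_min = 0.1 here) is an assumed safeguard against noise amplification in low-transmission regions, not a value taken from the text:

```python
import numpy as np

def recover_scene(I, V, t, t_min=0.1):
    # Eq. (2): s(x) = (I(x) - V) / t(x) + V, applied per color channel.
    t = np.maximum(t, t_min)[..., np.newaxis]  # avoid division by ~0
    return np.clip((I - V) / t + V, 0.0, 1.0)
```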


Figure 5: Color restoration

3.2 Image Fusion

The degradation caused by backscattering is corrected using a fusion process [10]. The image obtained from the restoration process is taken as the input. Two versions of this input are produced by white balance and contrast enhancement, and the results are decomposed into three weight maps, namely luminance, saliency, and chromaticity. Finally, the images are fused using Laplacian fusion. The flow diagram of the image fusion process is shown in Fig. 6.


Figure 6: Image fusion

1) White Balance: The main objective of the white balance algorithm is to remove color casts [11], which are caused by the selective absorption of colors with depth. The primary step of white balancing is to improve the image appearance by eliminating unnatural color casts introduced by the illumination. A simple white balance algorithm is used to improve color constancy. The image obtained from the restoration process is taken as the input, and the first version of the input is passed through the white balance algorithm. The mean luminance is identified by converting the RGB image to grayscale. The red, green, and blue channels are extracted individually and their means are computed. Finally, the channel means are equalized and the channels are recombined into a single RGB image.
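A minimal gray-world sketch consistent with this description (the file name is hypothetical):

```python
import cv2
import numpy as np

def gray_world_white_balance(bgr):
    # Scale each channel so its mean matches the overall gray level,
    # equalizing the per-channel means as described above.
    img = bgr.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    balanced = img * (channel_means.mean() / channel_means)
    return np.clip(balanced, 0, 255).astype(np.uint8)

wb = gray_world_white_balance(cv2.imread("restored.png"))
```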

2) Contrast Enhancement: The purpose of contrast enhancement is to make the objects of interest distinguishable by enhancing low-contrast regions. Low contrast arises in various ways, such as airlight influence, attenuation, turbidity, and backscattering; the strength of these factors increases linearly with the distance of the object from the water surface as well as from the camera. The expression proposed by Ancuti et al. [12] for enhancing the contrast is given by [13]:

I_2(x) = \gamma\,(I(x) - \bar{I}) \quad (3)

where I_2(x) refers to the second version of the restored input image, \bar{I} is the mean luminance of the image, and γ is the factor by which the luminance is linearly stretched.
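A minimal sketch of Eq. (3); γ = 2.5 is an assumed stretch factor and the clipping to [0, 1] is a practical safeguard, neither of which is specified in the text:

```python
import numpy as np

def contrast_enhance(img, gamma=2.5):
    # Eq. (3): I2(x) = gamma * (I(x) - mean luminance), for an image
    # normalized to [0, 1]; out-of-range values are clipped.
    return np.clip(gamma * (img - img.mean()), 0.0, 1.0)
```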

3) Weight Maps: The main disadvantage of the enhancement operations above is that the same operation is applied to all regions of the image, which also alters regions that need no change. Weight maps are introduced to overcome this: their objective is to identify the spatially relevant regions of the degraded image. Three weight maps, namely luminance, saliency, and chromaticity, are used to identify those regions.

a) Luminance weight map: The luminance weight map assigns higher values to visible regions and lower values to nonvisible ones. To achieve this, the visibility of each pixel is measured from the RGB color channels. The following expression [7] is applied to every pixel of the image:

W_L^k = \sqrt{\frac{1}{3}\left[(R^k - L^k)^2 + (G^k - L^k)^2 + (B^k - L^k)^2\right]} \quad (4)

where W_L^k refers to the weight map to be calculated, L^k to the luminance, R^k, G^k, and B^k to the color channels, and k to the index of the input.
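A small sketch of Eq. (4), assuming input values normalized to [0, 1] and approximating the luminance L by the per-pixel mean of the channels (a weighted luma would be an equally valid choice):

```python
import numpy as np

def luminance_weight(img):
    # Eq. (4): standard deviation of the R, G, B channels around the
    # (approximated) luminance at each pixel.
    lum = img.mean(axis=2)
    dev = sum((img[..., c] - lum) ** 2 for c in range(3)) / 3.0
    return np.sqrt(dev)
```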

b) Saliency weight map: Saliency is a perceptual quality measure, also termed visual attention, that identifies the attention-grabbing parts of an image and makes such regions stand out from their neighborhood. According to Ancuti et al. [12], saliency can be estimated using:

W_S^k = \left\lVert I_{whc}^k(x) - I_{\mu}^k \right\rVert \quad (5)

where I_{\mu}^k refers to the arithmetic mean pixel value of the input, I_{whc}^k(x) refers to a blurred version of the input, S denotes saliency, and W_S^k is its weight map.
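A sketch of Eq. (5) in the Achanta style, comparing a blurred version of the image against the mean image color; the 5 × 5 Gaussian kernel is an assumption:

```python
import cv2
import numpy as np

def saliency_weight(img):
    # Eq. (5): per-pixel distance between the blurred image and the
    # mean image color, for an image normalized to [0, 1].
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    mean_color = img.reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(blurred - mean_color, axis=2)
```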

c) Chromatic weight map: The chromatic weight map controls the saturation gain of the resultant image; saturation strongly influences human visual preference. It can be determined using [12]:

W_C^k(x) = \exp\left(-\frac{\left(S^k(x) - S_{max}^k\right)^2}{2\sigma^2}\right) \quad (6)

where S_{max}^k is a constant that depends on the color space, S^k(x) is the saturation at pixel x, k indexes the derived inputs, and the standard deviation is fixed at σ = 0.3.
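A sketch of Eq. (6); S_max = 1 is assumed for saturation normalized to [0, 1], with σ = 0.3 as stated above:

```python
import cv2
import numpy as np

def chromatic_weight(bgr, s_max=1.0, sigma=0.3):
    # Eq. (6): Gaussian falloff of the gap between per-pixel
    # saturation and the maximum saturation.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[..., 1].astype(np.float64) / 255.0
    return np.exp(-((sat - s_max) ** 2) / (2.0 * sigma ** 2))
```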

4) Multi-Scale Fusion: Image fusion is the process of combining the details of two or more images into a single image. The weight maps obtained from the two versions of the image are combined into one image using the following expression [14]:

R_f(x) = \sum_{k} W_k(x)\, I_k(x) \quad (7)

where W_k(x) refers to the weight map of input k, I_k(x) to the input image, and R_f(x) to the fused output. Artefacts can occur when this expression is applied directly; to overcome this, a pyramid approach is used. The objective of the pyramid, or multi-scale, representation is to fuse the images over subsampled scales. A Gaussian pyramid [15] is used to obtain the multi-scale representation: at each level the image is smoothed and its resolution is reduced by subsampling.

G_{l+1} = \mathrm{Down}(G_l \otimes G_{kernel}) \quad (8)

where l is the layer being downsampled, Down denotes the downsampling operation, and G_{kernel} is the Gaussian kernel that smooths each layer before subsampling. High-frequency detail is lost in this process; to recover it, a Laplacian pyramid is used (the same decomposition is also applied in image compression). The normalized weights are decomposed with the Gaussian pyramid and combined with the Laplacian decomposition of the inputs. The high-frequency information at each level is estimated with the following expression:

L_l = G_l - \mathrm{Up}(G_{l+1}) \otimes G_{kernel} \quad (9)

where Up denotes the upsampling operation, L_l is the Laplacian layer obtained for the particular level l, and ⊗ denotes convolution. The number of decomposition layers of the pyramid is obtained as:

n = \lfloor \log_2(\min(h, w)) \rfloor - 2 \quad (10)

Here h and w represent the numbers of rows and columns of the layered image, and n is the required number of decomposition layers. The final expression of the fusion pyramid is:

R_l(x) = \sum_{k} G_l\{W_k(x)\}\, L_l\{I_k(x)\} \quad (11)

R_f(x) = \sum_{l} \mathrm{Up}(R_l(x)) \quad (12)
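The following sketch pulls Eqs. (7)–(12) together, assuming inputs normalized to [0, 1] whose three weight maps have already been combined into a single map per input; cv2.pyrDown/pyrUp stand in for the Gaussian smoothing and resampling steps, and the small epsilon in the normalization is a guard against division by zero:

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    # Eq. (8): repeatedly smooth and downsample.
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    # Eq. (9): each level keeps the detail lost by downsampling.
    gauss = gaussian_pyramid(img, levels)
    pyr = []
    for l in range(levels - 1):
        up = cv2.pyrUp(gauss[l + 1], dstsize=gauss[l].shape[1::-1])
        pyr.append(gauss[l] - up)
    pyr.append(gauss[-1])  # the coarsest level is kept as-is
    return pyr

def fuse(inputs, weights):
    # Eqs. (10)-(12): per-level weighted sum, then collapse the pyramid.
    h, w = inputs[0].shape[:2]
    levels = int(np.log2(min(h, w))) - 2           # Eq. (10), assumed form
    total = sum(weights) + 1e-12
    weights = [wm / total for wm in weights]       # normalize weight maps
    fused = None
    for img, wm in zip(inputs, weights):
        lap = laplacian_pyramid(img.astype(np.float64), levels)
        gau = gaussian_pyramid(wm.astype(np.float64), levels)
        prod = [l * g[..., np.newaxis] for l, g in zip(lap, gau)]
        fused = prod if fused is None else [f + p for f, p in zip(fused, prod)]
    # Collapse: upsample from the coarsest level, adding detail back.
    out = fused[-1]
    for l in range(len(fused) - 2, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused[l].shape[1::-1]) + fused[l]
    return np.clip(out, 0.0, 1.0)
```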

4  Results

Fig. 7 shows the input images used for the single-image visual enhancement technique; they were captured at various depths and turbidity levels. These images are first subjected to the color restoration process, in which the transmission is estimated, and then to image fusion, driven by the weight maps computed from the outputs of the white balance and contrast enhancement algorithms. Fig. 8 shows the final restored images.


Figure 7: Distorted input images


Figure 8: Restored images

RGB Plot:

The RGB plot visualizes the data layers of an image in a two-dimensional space, showing each channel's color intensity with brightness on the x-axis and the pixel count on the y-axis. Comparing the plots of the input and output images shows that, in the degraded inputs, the red channel intensity is overshadowed by the blue and green channels, as shown in Figs. 9 and 10.


Figure 9: RGB plot of input images


Figure 10: RGB plot of restored images

UIQM:

The results obtained are verified and tabulated using the underwater image quality measure (UIQM) [16]. The visual quality of underwater images is assessed with three main components, namely the Underwater Image Colorfulness Measurement (UICM), the Underwater Image Sharpness Measurement (UISM), and the Underwater Image Contrast Measurement (UIConM).

1) UICM: The Underwater Image Colorfulness Measurement quantifies the colorfulness of a given image. Underwater images are usually degraded by the low intensity of red light, so color rendition is a main objective of image enhancement. The following equation determines the colorfulness of the image:

UICM = -0.0268\,\sqrt{\mu_{\alpha,RG}^2 + \mu_{\alpha,YB}^2} + 0.1586\,\sqrt{\sigma_{\alpha,RG}^2 + \sigma_{\alpha,YB}^2} \quad (13)

2) UISM: The Underwater Image Sharpness Measurement estimates the sharpness of the image; sharpness indicates how well the details of the image are preserved. The following expressions determine the sharpness of the image:

UISM = \sum_{c=1}^{3} \lambda_c\, EME(\text{grayscale edge}_c) \quad (14)

EME = \frac{2}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{k=1}^{k_2} \log\left(\frac{I_{max,k,l}}{I_{min,k,l}}\right) \quad (15)

3) UIConM: Contrast is defined as the difference between bright and dark pixels. The Underwater Image Contrast Measurement estimates the contrast of the image using the following expressions:

UIConM = \log \mathrm{AMEE}(\text{Intensity}) \quad (16)

\log \mathrm{AMEE} = \frac{1}{k_1 k_2} \sum_{l=1}^{k_1} \sum_{k=1}^{k_2} \frac{I_{max,k,l} - I_{min,k,l}}{I_{max,k,l} + I_{min,k,l}} \cdot \log\left(\frac{I_{max,k,l} - I_{min,k,l}}{I_{max,k,l} + I_{min,k,l}}\right) \quad (17)

The results obtained from UICM, UISM, and UIConM are combined to compute UIQM using the following expression [17]:

UIQM = 0.0282\,UICM + 0.2953\,UISM + 3.5753\,UIConM \quad (18)
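As a worked sketch of Eqs. (15) and (18), the block below computes EME over a k1 × k2 grid of non-overlapping blocks and combines precomputed component scores into UIQM; the block-grid interpretation and the epsilon guard are assumptions:

```python
import numpy as np

def eme(gray, k1=8, k2=8, eps=1e-12):
    # Eq. (15): mean log-ratio of block max to block min over a
    # k1 x k2 grid of non-overlapping blocks.
    h, w = gray.shape
    bh, bw = h // k1, w // k2
    total = 0.0
    for i in range(k1):
        for j in range(k2):
            block = gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            total += np.log((block.max() + eps) / (block.min() + eps))
    return 2.0 / (k1 * k2) * total

def uiqm(uicm, uism, uiconm):
    # Eq. (18): weighted combination of the three component measures.
    return 0.0282 * uicm + 0.2953 * uism + 3.5753 * uiconm
```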

The image quality measured for the input degraded images and the dehazed images is tabulated in Tabs. 1 and 2.

[Table 1]

[Table 2]

5  Conclusion

Underwater images play a significant role in many research areas; geologists, for example, require such visual data for continuous monitoring of the ocean environment. In this paper, the causes of underwater image distortion (backscattering and attenuation) were identified, and a survey of traditional single-image enhancement algorithms was carried out. Based on those approaches, the proposed method was developed. First, the cause of color degradation is identified and the required transmission is estimated for the restoration process. Dehazing is then performed by the image fusion process, using the weight maps obtained from the white balance and contrast enhancement processes applied to the two versions of the restored image. Finally, RGB plots are produced to show the color difference between the input and resultant images. In the future, the proposed single-image visual enhancement can be improved in various ways: the approach can be extended to greater depths and higher turbidity levels; the results obtained can be used as a reference dataset for deep learning algorithms; the dehazing process can be modified with newer fusion techniques; and more weight maps can be used in the fusion process to improve the accuracy of estimating the required data.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

References

  1. H. Lu, Y. Li and L. Zhang, “Contrast enhancement for images in turbid water,” Journal of the Optical Society of America A, vol. 32, no. 5, pp. 886–893, 2015.
  2. J. Gwanggil, “Color image enhancement by histogram equalization in heterogeneous color space,” International Journal of Multimedia and Ubiquitous Engineering, vol. 9, no. 7, pp. 309–318, 2014.
  3. R. Rajesh Kumar, G. Puran and S. Balvant, “Underwater image segmentation using clahe enhancement and thresholding,” International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 1, pp. 118–123, 2012.
  4. S. Drews, R. Paulo and J. Nascimento, “Underwater depth estimation and image restoration based on single images,” IEEE Computer Graphics and Applications, vol. 36, no. 9, pp. 24–35, 2016.
  5. A. Galdran, D. Pardo and A. Picón, “Automatic red-channel underwater image restoration,” Journal of Visual Communication and Image Representation, vol. 26, no. 10, pp. 132–145, 2015.
  6. S. Emberton and L. Chittka, “Hierarchical rank-based veiling light estimation for underwater dehazing,” in British Machine Vision Conf. (BMVC), Swansea, UK, pp. 73–81, 2015.
  7. D. Akkaynak and T. Treibitz, “Sea-thru: A method for removing water from underwater images,” in IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, pp. 832–843, 2019.
  8. J. Lu, N. Li and S. Zhang, “Multi-scale adversarial network for underwater image restoration,” Optics and Laser Technology, vol. 110, no. 23, pp. 105–113, 2019.
  9. D. Berman, T. Treibitz and S. Avidan, “Diving into haze-lines: Color restoration of underwater images,” in British Machine Vision Conf. (BMVC), London, UK, pp. 723–731, 2017.
  10. T. Ye, D. Dong and W. Xu, “A novel two-step strategy based on white-balancing and fusion for underwater image enhancement,” IEEE Access, vol. 8, no. 2, pp. 217651–217670, 2020.
  11. S. Anwar, C. Li and F. Porikli, “Deep underwater image enhancement,” Computer Vision and Pattern Recognition, vol. 1, no. 1, pp. 1–10, 2018.
  12. C. Ancuti and C. Ancuti, “Single image dehazing by multi-scale fusion,” IEEE Transactions on Image Processing, vol. 22, no. 8, pp. 3271–3282, 2013.
  13. K. Barnard, V. Cardei and B. Funt, “A comparison of computational color constancy algorithms—part I: Experiments with image data,” IEEE Transactions on Image Processing, vol. 2, no. 9, pp. 505–513, 2002.
  14. X. Yadong, Y. Cheng and S. Beibei, “A novel multi-scale fusion framework for detail-preserving low-light image enhancement,” Information Sciences, vol. 548, no. 23, pp. 378–397, 2021.
  15. C. Li, S. Anwar and F. Porikli, “Underwater scene prior inspired deep underwater image and video enhancement,” Pattern Recognition, vol. 98, no. 22, pp. 107038–107049, 2020.
  16. P. Karen and G. Chen, “Human-visual-system-inspired underwater image quality measures,” IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 541–556, 2016.
  17. Y. Miao and S. Arcot, “An underwater color image quality evaluation metric,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 213–217, 2015.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.