Open Access


ELM-Based Shape Adaptive DCT Compression Technique for Underwater Image Compression

M. Jamuna Rani1,*, C. Vasanthanayaki2

1 Sona College of Technology, Salem, India
2 Government College of Engineering, Salem, India

* Corresponding Author: M. Jamuna Rani. Email:

Computer Systems Science and Engineering 2023, 45(2), 1953-1970.


Underwater imaging and transmission pose numerous challenges: low signal bandwidth, slow data transmission bit rates, noise, underwater blue/green light haze, and so on. These factors distort the estimation of the Region of Interest and are prime hurdles in deploying efficient compression techniques. Because of the blue/green light cast in underwater imagery, shape-adaptive or block-wise compression techniques often fail, as it becomes very difficult to estimate the compression level/coefficients for a particular region. This paper proposes an Extreme Learning Machine (ELM) based shape-adaptive Discrete Cosine Transform (DCT) compression method for underwater images. Underwater color image restoration based on veiling light estimation is performed first, followed by saliency map estimation using Gray Level Co-occurrence Matrix (GLCM) features. An ELM network is then modeled that takes two inputs, the signal strength and the saliency value of the region to be compressed, and predicts the level of compression (DCT coefficients and compression steps). This method ensures fewer errors in the Region of Interest and a better trade-off between available signal strength and compression level.
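The ELM step described above can be sketched in a few lines of NumPy: random input weights and biases, a closed-form least-squares solve for the output weights. The training data, hidden-layer size, and the mapping from (signal strength, saliency) to a retained-coefficient fraction below are illustrative assumptions for the sketch, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=32):
    """Fit an ELM: random hidden layer, closed-form output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy training set (assumed): stronger signal and higher saliency ->
# retain a larger fraction of DCT coefficients for that region.
X = rng.uniform(0.0, 1.0, size=(200, 2))   # columns: [signal strength, saliency]
y = 0.2 + 0.4 * X[:, 0] + 0.4 * X[:, 1]    # retained-coefficient fraction in [0.2, 1]

model = elm_train(X, y)

# Predict the compression level for one restored image region.
region = np.array([[0.8, 0.9]])            # high signal strength, salient region
print("predicted retained-coefficient fraction:",
      round(float(elm_predict(model, region)[0]), 3))
```

The predicted fraction would then drive the shape-adaptive DCT stage, i.e., how many coefficients to keep per region before quantization and transmission.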


Cite This Article

M. Jamuna Rani and C. Vasanthanayaki, "ELM-based shape adaptive DCT compression technique for underwater image compression," Computer Systems Science and Engineering, vol. 45, no. 2, pp. 1953–1970, 2023.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.