
Deep Learning for Image/Video Restoration and Compression

Submission Deadline: 03 January 2022 (closed)

Guest Editors

Dr. Mahendrakhan K, Hindusthan Institute of Technology, India.
Dr. Paulchamy Balaiyah, Hindusthan Institute of Technology, India.
Dr. Uma Maheshwari, Hindusthan Institute of Technology, India.
Dr. S. Prabu, Mahendra Institute of Technology, India.

Summary

The huge success of deep-learning-based approaches in computer vision has motivated research into learned solutions to classic image/video processing problems, such as denoising, deblurring, super-resolution, and compression. Learning-based approaches have thus emerged as a promising nonlinear signal processing framework for image/video restoration and compression. Recent works have shown that trained models can deliver substantial performance improvements over conventional approaches, and the state of the art in image/video restoration and compression is being redefined. However, compelling research challenges remain to be overcome. These include: i) learned models contain millions of parameters, which makes real-time inference on common devices a challenge; ii) learned models are difficult to interpret and their results are hard to explain; iii) training requires a loss function that accurately reflects human perception of quality. This special issue calls for novel architectures and training approaches for effective image/video restoration and compression networks to address these and other challenges.
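
As a rough illustration of challenge iii), the sketch below shows one common way restoration networks are trained to better match human perception of quality: a pixel-wise L1 term combined with a feature-space term computed from a pretrained VGG-16 (via torchvision). The class name PerceptualL1Loss and the weighting factor feat_weight are illustrative assumptions and not part of this call.

# Minimal sketch (illustrative, not prescribed by this special issue):
# combine a pixel-wise L1 loss with a VGG-16 feature-space loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights


class PerceptualL1Loss(nn.Module):
    def __init__(self, feat_weight: float = 0.1):
        super().__init__()
        # Early VGG-16 convolutional layers used as a fixed feature extractor.
        # Inputs are assumed to be RGB images in [0, 1]; ImageNet normalization
        # is omitted here for brevity.
        self.features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.feat_weight = feat_weight

    def forward(self, restored: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        pixel_term = F.l1_loss(restored, target)
        feat_term = F.l1_loss(self.features(restored), self.features(target))
        return pixel_term + self.feat_weight * feat_term


if __name__ == "__main__":
    loss_fn = PerceptualL1Loss()
    x = torch.rand(1, 3, 64, 64)   # restored image (e.g., network output)
    y = torch.rand(1, 3, 64, 64)   # ground-truth image
    print(loss_fn(x, y).item())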


Keywords

● New architectures for image and video restoration, including super-resolution, denoising, deblurring, dehazing, and inpainting.
● Novel learned methods for motion compensation and image/video compression.
● Computationally efficient networks for image/video restoration and compression.
● Explainable deep learning for image/video restoration and compression.
● Training with novel loss functions that accurately reflect human perception of quality.
● Robust methods for real-world images/videos, where training data is noisy and/or limited.
