Open Access

ARTICLE

Deep Image Restoration Model: A Defense Method Against Adversarial Attacks

Kazim Ali1,*, Adnan N. Qureshi1, Ahmad Alauddin Bin Arifin2, Muhammad Shahid Bhatti3, Abid Sohail3, Rohail Hassan4
1 Department of Information Technology, University of Central Punjab, Lahore, 54000, Pakistan
2 Department of Communication Technology and Network, Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Selangor, 43400, Malaysia
3 Department of Computer Science, COMSATS University Islamabad, Lahore Campus, 54000, Pakistan
4 Othman Yeop Abdullah Graduate School of Business, Universiti Utara Malaysia, Kuala Lumpur, 50300, Malaysia
* Corresponding Author: Kazim Ali. Email:

Computers, Materials & Continua 2022, 71(2), 2209-2224. https://doi.org/10.32604/cmc.2022.020111

Received 10 May 2021; Accepted 27 July 2021; Issue published 07 December 2021

Abstract

Deep learning and computer vision are rapidly growing fields in today's world of information technology. Deep learning algorithms and computer vision have achieved great success in applications such as image classification, speech recognition, self-driving vehicles, and disease diagnostics. Despite this success, these learning algorithms face a severe threat from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally and slightly perturbed. The changes are imperceptible to humans, yet the model misclassifies the inputs with high probability, which severely degrades its performance and predictions. In this scenario, we present a deep image restoration model that restores adversarial examples so that the target model classifies them correctly again. We show that our defense method against adversarial attacks, based on a deep image restoration model, is simple and state-of-the-art by providing strong experimental evidence. We use the MNIST and CIFAR10 datasets for the experiments and analysis of our defense method. Finally, we compare our method with other state-of-the-art defense methods and show that our results are better than those of the rival methods.
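The perturbation idea the abstract describes can be illustrated with a minimal FGSM-style sketch on a toy linear classifier; all values here are illustrative assumptions, not taken from the paper, which works with deep networks on MNIST and CIFAR10:

```python
import numpy as np

# Toy linear classifier: predict +1 if w.x > 0, else -1.
w = np.array([1.0, -2.0, 3.0])

def predict(x):
    return 1 if w @ x > 0 else -1

x = np.array([0.5, 0.2, 0.1])   # clean input; score 0.4, classified +1

# FGSM-style step: move each feature by eps in the direction that
# lowers the score for the true class (the score's gradient w.r.t. x is w).
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # a per-feature change of only 0.1 flips the label
```

The same principle scales to images: a pixel-wise perturbation bounded by a small epsilon is visually negligible, yet it can push a deep classifier across its decision boundary, which is exactly what a restoration-based defense tries to undo.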

Keywords

Computer vision; deep learning; convolutional neural networks; adversarial examples; adversarial attacks; adversarial defenses

Cite This Article

K. Ali, A. N. Qureshi, A. Alauddin Bin Arifin, M. Shahid Bhatti, A. Sohail et al., "Deep image restoration model: a defense method against adversarial attacks," Computers, Materials & Continua, vol. 71, no.2, pp. 2209–2224, 2022.


This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.