Open Access


Defending Adversarial Examples by a Clipped Residual U-Net Model

Kazim Ali1,*, Adnan N. Qureshi1, Muhammad Shahid Bhatti2, Abid Sohail2, Mohammad Hijji3

1Department of Computer Science, Faculty of Information Technology, University of Central Punjab Lahore, 54000, Pakistan
2 Department of Computer Science, COMSATS University Islamabad, Lahore Campus, Lahore, 54000, Pakistan
3 Faculty of Computers and Information Technology, Computer Science Department, University of Tabuk, Tabuk, 47711, Saudi Arabia

* Corresponding Author: Kazim Ali. Email: email

Intelligent Automation & Soft Computing 2023, 35(2), 2237-2256.


Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are vulnerable to adversarial attacks. Such attacks can easily fool deep learning models, e.g., the convolutional neural networks (CNNs) used in computer vision tasks ranging from image classification to object detection. Adversarial examples are carefully crafted by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet Defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense. We show experimentally that our approach outperforms these previous defensive techniques. The proposed CRU-Net model maps adversarial image examples to clean images by eliminating the adversarial perturbation. The defensive approach is based on residual and U-Net learning. Extensive experiments on the MNIST and CIFAR10 datasets demonstrate that the proposed CRU-Net defense model prevents adversarial example attacks in both white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original images and the clean image examples restored by the proposed CRU-Net defense model.
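The abstract describes adversarial examples as clean images plus a slight perturbation, and reports PSNR as a similarity measure between original and restored images. As a minimal illustrative sketch (not the paper's implementation), the snippet below applies an FGSM-style sign perturbation with a small L-infinity budget `eps` to a toy image and computes PSNR from its standard definition; the random sign matrix stands in for the loss-gradient sign an actual attack would use.

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((28, 28))                 # toy MNIST-sized image in [0, 1]
eps = 8 / 255                                # typical small L-infinity budget
sign_grad = np.sign(rng.standard_normal((28, 28)))   # stand-in for the gradient sign
adversarial = np.clip(clean + eps * sign_grad, 0.0, 1.0)

# The perturbation is slight, so PSNR stays high even for the attacked image.
print(f"PSNR(clean, adversarial) = {psnr(clean, adversarial):.2f} dB")
```

A denoising defense such as the one described here would aim to push this PSNR back toward infinity (identical images) by removing the perturbation before classification.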


Cite This Article

APA Style
Ali, K., Qureshi, A.N., Bhatti, M.S., Sohail, A., Hijji, M. (2023). Defending adversarial examples by a clipped residual u-net model. Intelligent Automation & Soft Computing, 35(2), 2237-2256.
Vancouver Style
Ali K, Qureshi AN, Bhatti MS, Sohail A, Hijji M. Defending adversarial examples by a clipped residual u-net model. Intell Automat Soft Comput. 2023;35(2):2237-2256.
IEEE Style
K. Ali, A.N. Qureshi, M.S. Bhatti, A. Sohail, and M. Hijji, "Defending adversarial examples by a clipped residual u-net model," Intell. Automat. Soft Comput., vol. 35, no. 2, pp. 2237-2256, 2023.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.