Open Access

ARTICLE


Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing

Hyeong-Gyeong Kim1, Sang-Min Choi2, Hyeon Seo2, Suwon Lee2,*

1 Artificial Intelligence Research and Development Team, Nsquare, Jinju-si, 52828, Republic of Korea
2 Department of Computer Science and Engineering, Gyeongsang National University, Jinju-si, 52828, Republic of Korea

* Corresponding Author: Suwon Lee.

Computers, Materials & Continua 2025, 84(3), 4381-4397. https://doi.org/10.32604/cmc.2025.067024

Abstract

Adversarial attacks pose a significant threat to artificial intelligence systems by exploiting vulnerabilities in deep learning models. Existing defense mechanisms often suffer from drawbacks such as the need for model retraining, significant inference-time overhead, and limited effectiveness against specific attack types. Because a perfect defense against adversarial attacks remains elusive, mitigation strategies are essential. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the image is randomly cropped to vary its dimensions and then placed at the center of a fixed 299 × 299 canvas, with the remaining area filled with zero padding. Next, Gaussian filtering with a 7 × 7 kernel and a standard deviation of 2 is applied using a convolution operation. Finally, the smoothed image is fed into the classification model. In visualizations plotting clean-image accuracy against robustness under attack, the proposed defense consistently appeared in the upper-right region across all attack scenarios, demonstrating its ability to preserve classification performance on clean images while significantly mitigating adversarial attacks. These visualizations confirm that the proposed method is an effective and reliable defense against adversarial perturbations. Moreover, the method incurs minimal computational overhead, making it suitable for real-time applications. Furthermore, owing to its model-agnostic nature, it can be easily incorporated into various neural network architectures, serving as a fundamental module for adversarial defense strategies.
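For illustration, the following is a minimal PyTorch sketch of the pipeline described above, assuming channel-first float inputs no larger than the 299 × 299 canvas; the crop-size bounds (80–100% of each dimension) and the function name are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def random_crop_pad_smooth(image: torch.Tensor,
                           canvas_size: int = 299,
                           kernel_size: int = 7,
                           sigma: float = 2.0) -> torch.Tensor:
    """Random crop -> center on zero-padded canvas -> Gaussian smoothing.

    Assumes image is (C, H, W) with H, W <= canvas_size.
    """
    C, H, W = image.shape

    # 1. Crop to a random size (the 0.8 lower bound is an assumption).
    new_h = torch.randint(int(0.8 * H), H + 1, (1,)).item()
    new_w = torch.randint(int(0.8 * W), W + 1, (1,)).item()
    top = torch.randint(0, H - new_h + 1, (1,)).item()
    left = torch.randint(0, W - new_w + 1, (1,)).item()
    crop = image[:, top:top + new_h, left:left + new_w]

    # 2. Place the crop at the center of a fixed canvas; the rest stays zero.
    canvas = image.new_zeros(C, canvas_size, canvas_size)
    y0 = (canvas_size - new_h) // 2
    x0 = (canvas_size - new_w) // 2
    canvas[:, y0:y0 + new_h, x0:x0 + new_w] = crop

    # 3. Gaussian filter (7 x 7, sigma = 2) as a depthwise convolution.
    ax = torch.arange(kernel_size, dtype=image.dtype) - (kernel_size - 1) / 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g2d = torch.outer(g1d, g1d)
    weight = (g2d / g2d.sum()).repeat(C, 1, 1, 1)  # shape (C, 1, k, k)
    smoothed = F.conv2d(canvas.unsqueeze(0), weight,
                        padding=kernel_size // 2, groups=C)
    return smoothed.squeeze(0)

# Usage: preprocess an image before a classifier expecting 299 x 299 inputs
# (e.g., Inception-v3):  logits = model(random_crop_pad_smooth(img).unsqueeze(0))
```

Because the transform is applied only to the input, it requires no retraining and can sit in front of any classifier, which is what makes the approach model-agnostic.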

Keywords

Adversarial attacks; deep learning; artificial intelligence systems; random cropping; Gaussian filtering; image smoothing

Cite This Article

APA Style
Kim, H., Choi, S., Seo, H., & Lee, S. (2025). Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing. Computers, Materials & Continua, 84(3), 4381–4397. https://doi.org/10.32604/cmc.2025.067024
Vancouver Style
Kim H, Choi S, Seo H, Lee S. Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing. Comput Mater Contin. 2025;84(3):4381–4397. https://doi.org/10.32604/cmc.2025.067024
IEEE Style
H. Kim, S. Choi, H. Seo, and S. Lee, “Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing,” Comput. Mater. Contin., vol. 84, no. 3, pp. 4381–4397, 2025. https://doi.org/10.32604/cmc.2025.067024



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.