Open Access
ARTICLE
Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing
1 Artificial Intelligence Research and Development Team, Nsquare, Jinju-si, 52828, Republic of Korea
2 Department of Computer Science and Engineering, Gyeongsang National University, Jinju-si, 52828, Republic of Korea
* Corresponding Author: Suwon Lee. Email:
Computers, Materials & Continua 2025, 84(3), 4381-4397. https://doi.org/10.32604/cmc.2025.067024
Received 23 April 2025; Accepted 12 June 2025; Issue published 30 July 2025
Abstract
Adversarial attacks pose a significant threat to artificial intelligence systems by exploiting vulnerabilities in deep learning models. Existing defense mechanisms often suffer from drawbacks such as the need for model retraining, significant inference-time overhead, and limited effectiveness against specific attack types. Achieving perfect defense against adversarial attacks remains elusive, emphasizing the importance of mitigation strategies. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the input image is randomly cropped to vary its dimensions and then placed at the center of a fixed 299 × 299 canvas, with the remaining area filled with zero padding. Next, Gaussian filtering with a 7 × 7 kernel and a standard deviation of two is applied via a convolution operation. Finally, the smoothed image is fed into the classification model. In the evaluation plots, the proposed defense consistently appeared in the upper-right region across all attack scenarios, demonstrating its ability to preserve classification performance on clean images while significantly mitigating adversarial attacks. This visualization confirms that the proposed method is effective and reliable for defending against adversarial perturbations. Moreover, the method incurs minimal computational overhead, making it suitable for real-time applications. Furthermore, owing to its model-agnostic nature, it can be easily incorporated into various neural network architectures, serving as a fundamental module for adversarial defense strategies.
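The abstract describes a concrete preprocessing pipeline: random cropping, center placement on a zero-padded 299 × 299 canvas, and 7 × 7 Gaussian smoothing with a standard deviation of two before classification. The sketch below illustrates one possible implementation of such a pipeline in PyTorch; the function names, the crop-size range, and other implementation details are assumptions for illustration and are not taken from the authors' code.

```python
# Illustrative sketch of the preprocessing described in the abstract:
# random crop -> center zero-padding to 299x299 -> 7x7 Gaussian blur (sigma=2)
# -> image ready for the classifier. Names and hyperparameters are assumed.
import torch
import torch.nn.functional as F

def gaussian_kernel2d(size: int = 7, sigma: float = 2.0) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

def randomized_smooth_defense(image: torch.Tensor,
                              canvas: int = 299,
                              min_crop: int = 250) -> torch.Tensor:
    """Random crop, center zero-padding, and Gaussian smoothing.

    image: (C, H, W) tensor with H = W = canvas, assumed for simplicity.
    min_crop: smallest allowed crop size (an assumed hyperparameter).
    """
    c, h, w = image.shape

    # 1) Random crop with a randomly chosen size and position.
    crop = int(torch.randint(min_crop, canvas + 1, (1,)))
    top = int(torch.randint(0, h - crop + 1, (1,)))
    left = int(torch.randint(0, w - crop + 1, (1,)))
    cropped = image[:, top:top + crop, left:left + crop]

    # 2) Place the crop at the center of a zero-filled 299x299 canvas.
    padded = torch.zeros(c, canvas, canvas, dtype=image.dtype)
    off = (canvas - crop) // 2
    padded[:, off:off + crop, off:off + crop] = cropped

    # 3) Smooth with a 7x7 Gaussian kernel (sigma = 2) via depthwise convolution.
    kernel = gaussian_kernel2d(7, 2.0).unsqueeze(0).unsqueeze(0).repeat(c, 1, 1, 1)
    smoothed = F.conv2d(padded.unsqueeze(0), kernel, padding=3, groups=c)
    return smoothed.squeeze(0)  # ready to be fed into the classification model

# Example usage on a random image.
x = torch.rand(3, 299, 299)
x_defended = randomized_smooth_defense(x)
print(x_defended.shape)  # torch.Size([3, 299, 299])
```

Because the cropping is randomized at inference time, an attacker cannot anticipate the exact transformation, and the Gaussian smoothing further attenuates high-frequency adversarial perturbations, as described in the abstract.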
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.