Changrui Liu1, Dengpan Ye1, *, Yueyun Shang2, Shunzhi Jiang1, Shiyu Li1, Yuan Mei1, Liqiang Wang3
CMC-Computers, Materials & Continua, Vol.62, No.3, pp. 1365-1386, 2020, DOI:10.32604/cmc.2020.07421
Abstract Image classifiers based on Deep Neural Networks (DNNs) have been proven easy to fool with well-designed perturbations. Previous defense methods either require expensive computation or reduce the accuracy of the image classifiers. In this paper, we propose a novel defense method based on perceptual hashing. Our main goal is to disrupt the generation of perturbations by comparing the similarities of images, thereby achieving the purpose of defense. To verify our idea, we defended against two main attack methods (a white-box attack and a black-box attack) on different DNN-based image classifiers and show that,…
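The abstract's core idea, comparing image similarity via perceptual hashes to detect the near-duplicate queries an attacker issues while crafting perturbations, can be sketched with a simple average hash (aHash). The paper's exact hash function and threshold are not given here; the 8x8 grayscale input, the threshold of 10, and the toy pixel data below are assumptions for illustration.

```python
# Sketch of perceptual-hash similarity checking (average hash / aHash).
# If two images' hashes fall within a small Hamming distance, they are
# treated as the same underlying image, which lets a defender flag the
# stream of slightly-perturbed queries produced during attack generation.

def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255); returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        # Each pixel contributes one bit: 1 if at or above the mean.
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two integer hashes."""
    return bin(h1 ^ h2).count("1")

# A toy "clean image" and a slightly perturbed copy (hypothetical data).
clean = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
perturbed = [[min(255, v + 2) for v in row] for row in clean]

distance = hamming(average_hash(clean), average_hash(perturbed))
print(distance <= 10)  # small distance -> likely the same underlying image
```

Because the small additive perturbation shifts every pixel and the mean together, the bit pattern is essentially unchanged, so the Hamming distance stays near zero while a genuinely different image would produce a much larger distance.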