Open Access
ARTICLE
A Fast Two-Stage Black-Box Deep Learning Network Attacking Method Based on Cross-Correlation
Deyin Li1, 2, Mingzhi Cheng3, Yu Yang1, 2, *, Min Lei1, 2, Linfeng Shen4
1 State Key Laboratory of Public Big Data, Guizhou University, Guiyang, 550025, China.
2 School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing, 100876, China.
3 College of New Media, Beijing Institute of Graphic Communication, Beijing, 102600, China.
4 School of Computing Science, Simon Fraser University, Burnaby, Canada.
* Corresponding Author: Yu Yang.
Computers, Materials & Continua 2020, 64(1), 623-635. https://doi.org/10.32604/cmc.2020.09800
Received 19 January 2020; Accepted 05 April 2020; Issue published 20 May 2020
Abstract
Deep learning networks are widely used in various systems that require
classification. However, deep learning networks are vulnerable to adversarial attacks. The
study on adversarial attacks plays an important role in defense. Black-box attacks require
less knowledge about target models than white-box attacks do, which means black-box
attacks are easier to launch and more valuable. However, state-of-the-art black-box
attacks still suffer from low success rates and large visual distances between the
generated adversarial images and the original images. This paper proposes a fast
black-box attack based on cross-correlation (FBACC). The attack is carried out in two
stages. In the first stage, an adversarial image that will be misclassified as the
target label is generated by gradient descent. At this point, the image may look
quite different from the original one. Then, in the second stage, the visual quality
is progressively improved while the image remains misclassified. By using the
cross-correlation method, the error in smooth regions is ignored, and the number of
iterations is reduced. Compared with previously proposed black-box adversarial attack
methods, FBACC achieves a higher fooling rate with fewer iterations. When attacking LeNet5 and
AlexNet respectively, the fooling rates are 100% and 89.56%. When attacking them at
the same time, the fooling rate is 69.78%. The FBACC method also offers a new
avenue for studying defenses against adversarial attacks.
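The two-stage idea can be illustrated with a minimal sketch. The toy linear classifier, the update direction, and the step sizes below are all illustrative assumptions, not the paper's actual method: the paper attacks LeNet5 and AlexNet in a black-box setting, whereas this sketch uses a known weight matrix to form a direction that raises the target logit (stage 1), then shrinks the perturbation while keeping the misclassification (stage 2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for the black-box model
# (illustrative only; the paper's targets are LeNet5 and AlexNet).
W = rng.normal(size=(3, 8))

def predict(x):
    return int(np.argmax(W @ x))

x_orig = rng.normal(size=8)
target = (predict(x_orig) + 1) % 3      # pick a wrong label as the target

# Stage 1: iterate until the image is classified as `target`.
# Here we cheat with the known weights: d satisfies W @ d = e_target,
# so each step raises only the target logit (a stand-in for the
# gradient-based learning described in the abstract).
d = np.linalg.pinv(W) @ np.eye(3)[target]
x = x_orig.copy()
for _ in range(10_000):
    if predict(x) == target:
        break
    x = x + 0.1 * d

# Stage 2: reduce the visual distance to the original step by step,
# accepting a step only if the misclassification is preserved.
for _ in range(100):
    candidate = x - 0.05 * (x - x_orig)
    if predict(candidate) == target:
        x = candidate

assert predict(x) == target             # still fooled after refinement
```

Stage 2 never increases the distance to the original image, so the final adversarial example is at least as close to the original as the stage-1 output while still carrying the target label.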
Cite This Article
D. Li, M. Cheng, Y. Yang, M. Lei and L. Shen, "A fast two-stage black-box deep learning network attacking method based on cross-correlation,"
Computers, Materials & Continua, vol. 64, no.1, pp. 623–635, 2020.