Open Access



Adaptive Backdoor Attack against Deep Neural Networks

Honglu He, Zhiying Zhu, Xinpeng Zhang*

School of Computer Science, Fudan University, Shanghai, 200433, China

* Corresponding Author: Xinpeng Zhang. Email: email

(This article belongs to this Special Issue: Cyberspace Intelligent Mapping and Situational Awareness)

Computer Modeling in Engineering & Sciences 2023, 136(3), 2617-2633.


In recent years, the number of parameters of deep neural networks (DNNs) has been increasing rapidly. Training DNNs is typically computation-intensive, so many users leverage cloud computing and outsource their training procedures. Outsourced computation carries a potential risk known as a backdoor attack, in which a well-trained DNN behaves abnormally on inputs carrying a certain trigger. Backdoor attacks can also be regarded as attacks that exploit crafted images. However, most backdoor attacks design a uniform trigger for all images, which can be easily detected and removed. In this paper, we propose a novel adaptive backdoor attack that overcomes this defect: a generator assigns a unique trigger to each image depending on its texture. To achieve this, we use a texture complexity metric to create a special mask for each image, which forces the trigger to be embedded into texture-rich regions. Because the trigger is distributed over textured regions, it is invisible to humans. Beyond trigger stealthiness, we limit the range of modification to the backdoor model to evade detection. Experiments show that our method is effective on multiple datasets and that traditional detectors cannot reveal the existence of the backdoor.
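To illustrate the idea of a texture-guided mask, the following is a minimal sketch (not the authors' implementation): per-patch variance stands in for the paper's texture complexity metric, the mask keeps only the most textured patches, and a per-image trigger is blended in through that mask. All function names and parameters here are hypothetical.

```python
import numpy as np

def texture_mask(image, patch=8, keep_ratio=0.3):
    """Binary mask selecting the most textured regions of a grayscale
    image, scored by per-patch variance (a simple stand-in for a
    texture-complexity metric)."""
    h, w = image.shape
    rows, cols = h // patch, w // patch
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = image[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            scores[i, j] = block.var()
    # Keep the top-k most textured patches.
    k = max(1, int(keep_ratio * scores.size))
    thresh = np.sort(scores.ravel())[-k]
    # Expand the patch-level decision back to pixel resolution.
    mask = np.kron(scores >= thresh, np.ones((patch, patch)))
    return mask.astype(np.float32)

def embed_trigger(image, trigger, mask, alpha=0.05):
    """Blend a per-image trigger into the textured regions only, at a
    low amplitude so the modification stays visually inconspicuous."""
    return image * (1 - alpha * mask) + trigger * (alpha * mask)
```

Flat regions receive no perturbation at all, which is what makes the trigger hard to spot; a learned generator, as in the paper, would produce the trigger pattern itself rather than a fixed one.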


Cite This Article

He, H., Zhu, Z., Zhang, X. (2023). Adaptive Backdoor Attack against Deep Neural Networks. CMES-Computer Modeling in Engineering & Sciences, 136(3), 2617–2633.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.