TY - EJOUR
AU - Lin, Bin
AU - Gao, Fei
AU - Zeng, Wenli
AU - Chen, Jixin
AU - Zhang, Cong
AU - Zhu, Qinsheng
AU - Zhou, Yong
AU - Zheng, Desheng
AU - Qiu, Qian
AU - Yang, Shan
TI - Enhancing the Adversarial Transferability with Channel Decomposition
T2 - Computer Systems Science and Engineering
PY - 2023
VL - 46
IS - 3
SN -
AB - The current adversarial attacks against deep learning models have achieved incredible success in the white-box scenario. However, they often exhibit weak transferability in the black-box scenario, especially when attacking models with defense mechanisms. In this work, we propose a new transfer-based black-box attack called the channel decomposition attack method (CDAM). It can attack multiple black-box models by enhancing the transferability of the adversarial examples. On the one hand, it tunes the gradient and stabilizes the update direction by decomposing the channels of the input example and calculating the aggregate gradient. On the other hand, it helps the attack escape from local optima by initializing the data point with random noise. Besides, it can be flexibly combined with other transfer-based attacks. Extensive experiments on the standard ImageNet dataset show that our method significantly improves the transferability of adversarial attacks. Compared with the state-of-the-art method, our approach improves the average success rate from 88.2% to 96.6% when attacking three adversarially trained black-box models, demonstrating the remaining shortcomings of existing deep learning models.
KW - Adversarial attack
KW - transferability
KW - black-box models
KW - deep learning
DO - 10.32604/csse.2023.034268
ER -