Susceptibility to Adversarial Attacks and Defense in Deep Learning Systems

Submission Deadline: 31 December 2022 (closed)

Guest Editors

Dr. Steven L. Fernandes, Creighton University, USA.
Prof. Yu-Dong Zhang, University of Leicester, UK.
Dr. João Manuel R. S. Tavares, University of Porto, Portugal. 


A recent market report predicted that through 2022, 30% of all cyberattacks against systems powered by deep learning (DL) will leverage training-data poisoning or adversarial examples. Owing to strong monetary incentives and the associated technological infrastructure, medical image analysis systems have recently been argued to be particularly susceptible to adversarial attacks: inputs crafted from raw data to fool a DL system into assigning an example to the wrong class while remaining undetectable to the human eye. 


Adversarial attacks are not the only kind of malicious input manipulation that changes the predictions of DL systems. An adversarial attack aims to preserve the semantic content of a given image, e.g., whether it shows healthy or diseased tissue, while changing the network's prediction for that image. 
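The mechanism described above, a small input perturbation that flips a model's prediction while leaving the image essentially unchanged, can be illustrated with a minimal FGSM-style sketch. The linear "classifier" below is an illustrative stand-in for a deep network, and all weights, sizes, and the budget calculation are assumptions for the sake of the example, not taken from any specific system discussed in this call.

```python
import numpy as np

# Minimal sketch of an FGSM-style attack on a linear classifier
# (a stand-in for a deep network; all numbers are illustrative).
rng = np.random.default_rng(0)
w = rng.normal(size=64)              # classifier weights
x = rng.normal(size=64)              # a clean input "image"

def predict(v):
    """Two-class decision: sign of the linear score."""
    return 1 if w @ v > 0 else 0

# For a linear score the gradient w.r.t. the input is just w.
# Step against the current class with the smallest per-feature
# budget eps that crosses the decision boundary.
score = w @ x
eps = abs(score) / np.abs(w).sum() * 1.01
x_adv = x - np.sign(score) * eps * np.sign(w)

assert predict(x_adv) != predict(x)       # the label flips...
assert np.max(np.abs(x_adv - x)) < 1.0    # ...under a small per-feature change
```

For a real deep network the input gradient is obtained by backpropagation rather than read off directly, but the principle is the same: a perturbation bounded per pixel, and therefore hard to see, can still move the input across the decision boundary.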

Besides such attacks, images can also be modified to change their actual content: signs of disease can be removed from a diseased image or added to a healthy one, again causing network predictions to change. Producing such synthetically altered images remains challenging, however, as it is difficult to guarantee that they look realistic and to control which image structures are altered, and the algorithms involved can be hard to train and require very large training datasets. Cybercriminals nevertheless invest heavily in motivating and training hackers to carry out such attacks. 

This special issue solicits ongoing research on adversarial attacks against deep learning models in medical image analysis. Effective defence mechanisms will play an important role in assisting researchers in designing firewalls and anti-spam systems. The special issue is keen to receive articles focused on the translational research, using deep learning, needed to defend against adversarial attacks.


● Foundations of adversarial deep learning
● Algorithms for attacking with adversarial learning
● Generative Adversarial Networks
● Adversarial Training and Generative Modelling
● Robust feature leakage
● Feature visualizations
● Infrastructural and algorithmic solutions for retroactive identification
● Hypothetical fraudulent illustrations
● Ubiquitous computing against emerging vulnerabilities
● E-health, m-health and e-patient records
● Modelling of vulnerabilities and threats and their evaluation
● IT infrastructure for adversarial attacks
● Protection and detection techniques against black-box, white-box, and gray-box adversarial attacks
● Robustness certification and property verification techniques
● Novel applications of adversarial learning and security
● Defenses against training/testing attacks
● Use of non-robust features for defence
