Open Access

ARTICLE


A Novel Unsupervised Structural Attack and Defense for Graph Classification

Yadong Wang1, Zhiwei Zhang1,*, Pengpeng Qiao2, Ye Yuan1, Guoren Wang1

1 School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100081, China
2 School of Computing, Institute of Science Tokyo, Tokyo, 152-8550, Japan

* Corresponding Author: Zhiwei Zhang.

(This article belongs to the Special Issue: Advances in Deep Learning and Neural Networks: Architectures, Applications, and Challenges)

Computers, Materials & Continua 2026, 86(1), 1-22. https://doi.org/10.32604/cmc.2025.068590

Abstract

Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, owing to their ability to learn complex graph structures. Despite this success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies rely primarily on label information to guide the attack, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an unsupervised attack method for graph classification that operates without label information, broadening its applicability across a wide range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastically augmented views of each graph. To perturb the graphs effectively, we then introduce an implicit estimator that measures the impact of candidate modifications to the graph structure. The proposed strategy identifies and flips the edges with the top-K highest estimator scores to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism tailored to the structural perturbations introduced by our attack; it enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets (NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1) to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across the datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves accuracy recoveries of up to 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
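The abstract does not include pseudocode, but the final selection step it describes (flip the edges with the top-K highest estimator scores, within the attack budget) can be sketched as follows. This is only an illustration: the function name, the toy graph, and the score matrix are hypothetical, and the scores stand in for whatever the paper's implicit estimator would produce.

```python
import numpy as np

def flip_top_k_edges(adj, scores, k):
    """Flip the k undirected edge slots with the highest perturbation scores.

    adj    : (n, n) symmetric 0/1 adjacency matrix of one graph
    scores : (n, n) symmetric matrix of estimated attack impact per edge slot
    k      : attack budget (number of edge flips allowed)
    """
    n = adj.shape[0]
    # Consider each undirected pair (i, j) with i < j exactly once.
    iu = np.triu_indices(n, k=1)
    order = np.argsort(scores[iu])[::-1]  # highest-scoring pairs first
    perturbed = adj.copy()
    for idx in order[:k]:
        i, j = iu[0][idx], iu[1][idx]
        # Flipping adds the edge if absent, removes it if present.
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed

# Toy 4-node path graph with a budget of k = 2 flips.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
scores = np.array([[0.0, 0.1, 0.9, 0.2],
                   [0.1, 0.0, 0.3, 0.8],
                   [0.9, 0.3, 0.0, 0.4],
                   [0.2, 0.8, 0.4, 0.0]])
out = flip_top_k_edges(adj, scores, k=2)  # flips slots (0,2) and (1,3)
```

In this toy run the two highest-scoring slots, (0, 2) and (1, 3), are both non-edges, so the attack adds them; with different scores the same loop would instead delete existing edges.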

Keywords

Graph classification; graph neural networks; adversarial attack

Cite This Article

APA Style
Wang, Y., Zhang, Z., Qiao, P., Yuan, Y., Wang, G. (2026). A Novel Unsupervised Structural Attack and Defense for Graph Classification. Computers, Materials & Continua, 86(1), 1–22. https://doi.org/10.32604/cmc.2025.068590
Vancouver Style
Wang Y, Zhang Z, Qiao P, Yuan Y, Wang G. A Novel Unsupervised Structural Attack and Defense for Graph Classification. Comput Mater Contin. 2026;86(1):1–22. https://doi.org/10.32604/cmc.2025.068590
IEEE Style
Y. Wang, Z. Zhang, P. Qiao, Y. Yuan, and G. Wang, “A Novel Unsupervised Structural Attack and Defense for Graph Classification,” Comput. Mater. Contin., vol. 86, no. 1, pp. 1–22, 2026. https://doi.org/10.32604/cmc.2025.068590



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.