Open Access

ARTICLE

Adversarial Attack Defense in Graph Neural Networks via Multiview Learning and Attention-Guided Topology Filtering

Cheng Yang, Xianghong Tang*, Jianguang Lu, Chaobin Wang

State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China

* Corresponding Author: Xianghong Tang

(This article belongs to the Special Issue: Artificial Intelligence Methods and Techniques to Cybersecurity)

Computers, Materials & Continua 2026, 87(3), 59 https://doi.org/10.32604/cmc.2026.076126

Abstract

Graph neural networks (GNNs) have demonstrated impressive capabilities in processing graph-structured data, yet their vulnerability to adversarial perturbations poses serious challenges to real-world applications. Existing defense methods often fail to handle diverse attack types or adapt to dynamic adversarial strategies because they typically rely on static defense mechanisms or focus narrowly on a single robustness dimension. To address these limitations, we propose the adversarial attention-based robustness strategy (AARS), a unified framework designed to enhance the robustness of GNNs against structural and feature perturbations. AARS operates in two stages: the first stage employs adversarial training with joint optimization to improve the model's resilience to malicious attacks and stabilize its decision boundaries; the second stage uses an attention mechanism to identify critical structural dependencies and guide a topology filtering module that dynamically suppresses adversarial edges while preserving essential graph semantics. Extensive experiments on benchmark node classification datasets demonstrate that AARS significantly outperforms existing baselines in classification accuracy under various attack scenarios, effectively improving the robustness of GNNs in both static and dynamic adversarial settings.
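The second stage described in the abstract, attention-guided suppression of suspicious edges, can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the learned attention score is stood in for by cosine similarity of endpoint features, and the `keep_ratio` pruning threshold is an assumed hyperparameter.

```python
import numpy as np

def attention_edge_filter(X, edges, keep_ratio=0.8):
    """Score each edge by the cosine similarity of its endpoint
    features (a stand-in for a learned attention score) and keep
    only the top `keep_ratio` fraction of edges, discarding the
    low-scoring ones that are more likely to be adversarial."""
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    Xn = X / norms                      # row-normalized features
    scores = np.array([Xn[u] @ Xn[v] for u, v in edges])
    k = max(1, int(len(edges) * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]  # indices of top-k edges
    return [edges[i] for i in sorted(keep)]

# Toy graph: nodes 0-2 have similar features; node 3 is dissimilar,
# so edge (0, 3) mimics an injected adversarial edge.
X = np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [-1.0, 1.0]])
edges = [(0, 1), (1, 2), (0, 3)]
filtered = attention_edge_filter(X, edges, keep_ratio=0.67)
print(filtered)  # the dissimilar edge (0, 3) is pruned
```

In the full method the scores would come from trained attention weights and the filtering would interact with the adversarially trained encoder; the sketch only conveys the scoring-then-pruning pattern.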

Keywords

Graph neural networks; adversarial robustness; adversarial training; attention mechanisms; adversarial attack defense

Cite This Article

APA Style
Yang, C., Tang, X., Lu, J., & Wang, C. (2026). Adversarial Attack Defense in Graph Neural Networks via Multiview Learning and Attention-Guided Topology Filtering. Computers, Materials & Continua, 87(3), 59. https://doi.org/10.32604/cmc.2026.076126
Vancouver Style
Yang C, Tang X, Lu J, Wang C. Adversarial Attack Defense in Graph Neural Networks via Multiview Learning and Attention-Guided Topology Filtering. Comput Mater Contin. 2026;87(3):59. https://doi.org/10.32604/cmc.2026.076126
IEEE Style
C. Yang, X. Tang, J. Lu, and C. Wang, “Adversarial Attack Defense in Graph Neural Networks via Multiview Learning and Attention-Guided Topology Filtering,” Comput. Mater. Contin., vol. 87, no. 3, Art. no. 59, 2026. https://doi.org/10.32604/cmc.2026.076126



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.