
Open Access

ARTICLE

Adversarial Attack Defense in Graph Neural Networks via Multiview Learning and Attention-Guided Topology Filtering

Cheng Yang, Xianghong Tang*, Jianguang Lu, Chaobin Wang
State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
* Corresponding Author: Xianghong Tang. Email: email
(This article belongs to the Special Issue: Artificial Intelligence Methods and Techniques to Cybersecurity)

Computers, Materials & Continua https://doi.org/10.32604/cmc.2026.076126

Received 14 November 2025; Accepted 09 February 2026; Published online 27 February 2026

Abstract

Graph neural networks (GNNs) have demonstrated impressive capabilities in processing graph-structured data, yet their vulnerability to adversarial perturbations poses serious challenges to real-world applications. Existing defense methods often fail to handle diverse types of attacks or adapt to dynamic adversarial strategies because they typically rely on static defense mechanisms or focus narrowly on a single robustness dimension. To address these limitations, we propose an adversarial attention-based robustness strategy (AARS), a unified framework designed to enhance the robustness of GNNs against structural and feature perturbations. AARS operates in two stages: the first stage employs adversarial training with joint optimization to improve the model's resilience to malicious attacks and stabilize its decision boundaries; the second stage incorporates an attention mechanism that identifies critical structural dependencies and guides a topology filtering module, which dynamically suppresses adversarial edges while preserving essential graph semantics. Extensive experiments on benchmark datasets for node classification demonstrate that AARS significantly outperforms existing baselines in classification accuracy under various attack scenarios, effectively improving the robustness of GNNs in both static and dynamic adversarial settings.
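To make the second stage concrete, the sketch below illustrates the general idea of attention-guided topology filtering in a toy setting: each edge receives an attention score derived from its endpoint features, scores are normalized per source node, and low-attention edges (likely adversarial insertions between dissimilar nodes) are pruned. This is not the paper's implementation; the dot-product scoring, the `filter_edges` function, and the threshold value are all illustrative assumptions.

```python
import math

def dot(u, v):
    # Dot-product similarity between two feature vectors (assumed scoring rule).
    return sum(a * b for a, b in zip(u, v))

def filter_edges(features, edges, threshold=0.2):
    """Keep only edges whose per-source-node softmax attention >= threshold.

    features: dict mapping node id -> feature vector
    edges:    list of directed (src, dst) pairs
    """
    # Raw attention logit for each edge (i, j).
    logits = {e: dot(features[e[0]], features[e[1]]) for e in edges}
    kept = []
    # Normalize logits per source node with a softmax, then prune.
    for i in {e[0] for e in edges}:
        nbr = [e for e in edges if e[0] == i]
        m = max(logits[e] for e in nbr)           # subtract max for stability
        exps = {e: math.exp(logits[e] - m) for e in nbr}
        z = sum(exps.values())
        kept += [e for e in nbr if exps[e] / z >= threshold]
    return kept

features = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [-1.0, 0.2]}
edges = [(0, 1), (0, 2)]  # (0, 2) links dissimilar nodes, mimicking an attack edge
print(filter_edges(features, edges))  # → [(0, 1)]
```

In this toy graph, the edge (0, 2) connects nodes with near-opposite features, so its normalized attention falls below the threshold and it is suppressed, while the edge between similar nodes survives; AARS applies this filtering dynamically during training rather than as a one-shot preprocessing step.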

Keywords

Graph neural networks; adversarial robustness; adversarial training; attention mechanisms; adversarial attack defense