Open Access

ARTICLE

DSGNN: Dual-Shield Defense for Robust Graph Neural Networks

Xiaohan Chen1, Yuanfang Chen1,*, Gyu Myoung Lee2, Noel Crespi3, Pierluigi Siano4

1 School of Cyberspace, Hangzhou Dianzi University, Hangzhou, 310018, China
2 School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, L3 3AF, UK
3 Telecom SudParis, Institut Polytechnique de Paris, Evry, 91011, France
4 Department of Management & Innovation Systems, University of Salerno, Salerno, 84084, Italy

* Corresponding Author: Yuanfang Chen

Computers, Materials & Continua 2025, 85(1), 1733–1750. https://doi.org/10.32604/cmc.2025.067284

Abstract

Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. This broadening scope of application places correspondingly higher demands on GNN robustness. However, as GNNs are deployed in more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that affect structure and features simultaneously and are significantly harder to defend against than isolated attacks. Such disruptions, whether caused by incomplete data, malicious attacks, or inherent noise, threaten the stable and reliable performance of conventional GNN models. To address this issue, this study proposes the Dual-Shield Graph Neural Network (DSGNN), a defense model that mitigates structural and feature perturbations simultaneously. DSGNN employs two parallel GNN channels to handle structural noise and feature noise independently, and introduces an adaptive fusion mechanism that integrates information from both pathways to produce robust node representations. Theoretical analysis shows that DSGNN achieves a tighter robustness bound under joint perturbations than conventional single-channel methods. Experimental evaluations on the Cora, CiteSeer, and Industry datasets show that DSGNN achieves the highest average classification accuracy under various adversarial settings, reaching 81.24%, 71.94%, and 81.66%, respectively, outperforming GNNGuard, GCN-Jaccard, GCN-SVD, RGCN, and NoisyGNN. These results underscore the importance of multi-view perturbation decoupling in building resilient GNN models for real-world applications.
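
To make the dual-channel idea in the abstract concrete, the sketch below shows one plausible shape of such an architecture: two parallel GNN channels, each operating on its own view of the graph, combined by a learned per-node gate. This is a minimal illustration under stated assumptions, not the authors' implementation; the class name DualShieldSketch, the sigmoid gating formulation, and the use of PyTorch Geometric's GCNConv are all illustrative choices not taken from the paper.

```python
# Minimal sketch of a dual-channel GNN with adaptive fusion, loosely following
# the abstract's description of DSGNN. All module names and the gating scheme
# are illustrative assumptions, not the published method.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class DualShieldSketch(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        # Channel A: meant to absorb structural noise (e.g., it would be fed
        # a sanitized or re-weighted adjacency in a full defense pipeline).
        self.struct_conv = GCNConv(in_dim, hid_dim)
        # Channel B: meant to absorb feature noise (e.g., it would be fed
        # denoised node features in a full defense pipeline).
        self.feat_conv = GCNConv(in_dim, hid_dim)
        # Adaptive fusion: a learned per-node gate mixing the two views.
        self.gate = nn.Linear(2 * hid_dim, 1)
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, x, edge_index_struct, edge_index_feat):
        # Each channel sees its own (pre-processed) view of the graph.
        h_s = F.relu(self.struct_conv(x, edge_index_struct))
        h_f = F.relu(self.feat_conv(x, edge_index_feat))
        # Per-node mixing weight in [0, 1], learned from both representations.
        alpha = torch.sigmoid(self.gate(torch.cat([h_s, h_f], dim=-1)))
        h = alpha * h_s + (1 - alpha) * h_f
        return self.classifier(h)


# Smoke test on a 4-node toy graph (both channels get the same edges here;
# in practice each would receive its own cleaned view).
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
model = DualShieldSketch(in_dim=8, hid_dim=16, num_classes=3)
logits = model(x, edge_index, edge_index)
print(logits.shape)  # torch.Size([4, 3])
```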

Keywords

Graph neural networks; adversarial attacks; dual-shield defense; certified robustness; node classification

Cite This Article

APA Style
Chen, X., Chen, Y., Lee, G. M., Crespi, N., & Siano, P. (2025). DSGNN: Dual-Shield Defense for Robust Graph Neural Networks. Computers, Materials & Continua, 85(1), 1733–1750. https://doi.org/10.32604/cmc.2025.067284
Vancouver Style
Chen X, Chen Y, Lee GM, Crespi N, Siano P. DSGNN: Dual-Shield Defense for Robust Graph Neural Networks. Comput Mater Contin. 2025;85(1):1733–1750. https://doi.org/10.32604/cmc.2025.067284
IEEE Style
X. Chen, Y. Chen, G. M. Lee, N. Crespi, and P. Siano, “DSGNN: Dual-Shield Defense for Robust Graph Neural Networks,” Comput. Mater. Contin., vol. 85, no. 1, pp. 1733–1750, 2025. https://doi.org/10.32604/cmc.2025.067284



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.