Open Access


Interpreting Randomly Wired Graph Models for Chinese NER

Jie Chen1, Jiabao Xu1, Xuefeng Xi1,*, Zhiming Cui1, Victor S. Sheng2

1 The School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China
2 The School of Computer Science, Texas Tech University, Texas, USA

* Corresponding Author: Xuefeng Xi.

(This article belongs to the Special Issue: Advanced Intelligent Decision and Intelligent Control with Applications in Smart City)

Computer Modeling in Engineering & Sciences 2023, 134(1), 747-761.


Interpreting deep neural networks is of great importance for understanding and verifying deep models on natural language processing (NLP) tasks. However, most existing approaches focus only on improving model performance and ignore interpretability. In this work, we propose a Randomly Wired Graph Neural Network (RWGNN) that uses a graph to model the structure of the neural network, addressing two major problems in Chinese NER: word-boundary ambiguity and polysemy. In addition, we develop a pipeline to explain the RWGNN using saliency maps and adversarial attacks. Experimental results demonstrate that our approach identifies meaningful and reasonable interpretations for the hidden states of the RWGNN.
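To illustrate the saliency-map side of such an interpretation pipeline, the sketch below computes a gradient-based (input × gradient) attribution for each token under a toy linear scorer. This is a minimal sketch, not the paper's implementation: the linear weights, embedding dimension, and example tokens are all made up for illustration, and a real RWGNN would require backpropagating through the full graph model.

```python
import numpy as np

def saliency_scores(embeddings, w):
    """Input-times-gradient saliency for a linear scorer.

    For a score s = w . sum_i e_i, the gradient of s with respect to
    each token embedding e_i is w, so the input-times-gradient
    attribution of token i reduces to |w . e_i|.
    """
    contributions = embeddings @ w       # per-token signed contribution
    return np.abs(contributions)         # saliency magnitude per token

# Toy example: four Chinese characters with random 8-dim embeddings.
rng = np.random.default_rng(0)
tokens = ["苏", "州", "大", "学"]
emb = rng.normal(size=(4, 8))
w = rng.normal(size=8)

scores = saliency_scores(emb, w)
most_salient = tokens[int(np.argmax(scores))]
```

In a real pipeline the linear scorer would be replaced by the trained model's output logit for a given entity label, with the gradient obtained by automatic differentiation rather than analytically.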


Cite This Article

APA Style
Chen, J., Xu, J., Xi, X., Cui, Z., & Sheng, V. S. (2023). Interpreting randomly wired graph models for Chinese NER. Computer Modeling in Engineering & Sciences, 134(1), 747-761.
Vancouver Style
Chen J, Xu J, Xi X, Cui Z, Sheng VS. Interpreting randomly wired graph models for Chinese NER. Comput Model Eng Sci. 2023;134(1):747-761.
IEEE Style
J. Chen, J. Xu, X. Xi, Z. Cui, and V. S. Sheng, "Interpreting randomly wired graph models for Chinese NER," Comput. Model. Eng. Sci., vol. 134, no. 1, pp. 747-761, 2023.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.