Open Access

ARTICLE

Joint Self-Attention Based Neural Networks for Semantic Relation Extraction

Jun Sun1, Yan Li1, Yatian Shen1,*, Wenke Ding1, Xianjin Shi1, Lei Zhang1, Xiajiong Shen1, Jing He2

1 School of Computer and Information Engineering, Henan University, Kaifeng, 475000, China.
2 Corporate and Investment Bank Technology, J. P. Morgan Chase N. A., 25 Bank St, Canary Wharf, London, E14 5JP, United Kingdom.

*Corresponding Author: Yatian Shen.

Journal of Information Hiding and Privacy Protection 2019, 1(2), 69-75. https://doi.org/10.32604/jihpp.2019.06357

Abstract

Relation extraction is an important task in the NLP community. However, many models fail to capture long-distance semantic dependencies and ignore the interaction between the semantics of the two entities. In this paper, we propose a novel neural network model for semantic relation classification, the joint self-attention Bi-LSTM (SA-Bi-LSTM), which models the internal structure of the sentence to obtain the importance of each word without relying on additional information, and captures long-distance semantic dependencies. We conduct experiments on the SemEval-2010 Task 8 dataset. Extensive experiments demonstrate that the proposed method is effective for relation classification, obtaining state-of-the-art classification accuracy with minimal feature engineering.
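
The abstract describes the SA-Bi-LSTM only at a high level; the PyTorch sketch below shows one plausible reading of it, in which single-head additive self-attention pools the Bi-LSTM hidden states into a sentence vector before classification. All layer names, dimensions, and the attention form are illustrative assumptions rather than the authors' exact design; num_classes=19 reflects the 19 relation labels of SemEval-2010 Task 8.

    # Minimal sketch of a self-attention Bi-LSTM relation classifier
    # (assumed architecture; additive attention pools the hidden states).
    import torch
    import torch.nn as nn

    class SABiLSTM(nn.Module):
        def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_classes=19):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            # additive self-attention: score each time step, softmax to weights
            self.attn = nn.Sequential(
                nn.Linear(2 * hidden_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )
            self.classifier = nn.Linear(2 * hidden_dim, num_classes)

        def forward(self, token_ids):                  # (batch, seq_len)
            h, _ = self.bilstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
            weights = torch.softmax(self.attn(h), dim=1)  # (batch, seq_len, 1)
            sentence = (weights * h).sum(dim=1)        # attention-weighted pooling
            return self.classifier(sentence)           # relation logits

    model = SABiLSTM(vocab_size=20000)
    logits = model(torch.randint(0, 20000, (4, 30)))   # 4 sentences, 30 tokens

Because the attention weights are computed from the sentence itself, each word's importance is obtained without additional features, and the weighted sum lets the classifier draw on states far from the entity positions, which is how the model is said to capture long-distance dependencies.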

Keywords


Cite This Article

J. Sun, Y. Li, Y. Shen, W. Ding, X. Shi et al., "Joint self-attention based neural networks for semantic relation extraction," Journal of Information Hiding and Privacy Protection, vol. 1, no. 2, pp. 69–75, 2019.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.