Open Access

ARTICLE

Arabic Named Entity Recognition: A BERT-BGRU Approach

Norah Alsaaran*, Maha Alrabiah
Department of Computer Science, Imam Muhammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
* Corresponding Author: Norah Alsaaran. Email:
(This article belongs to this Special Issue: Deep Learning Trends in Intelligent Systems)

Computers, Materials & Continua 2021, 68(1), 471-485. https://doi.org/10.32604/cmc.2021.016054

Received 20 December 2020; Accepted 20 January 2021; Issue published 22 March 2021

Abstract

Named Entity Recognition (NER) is one of the fundamental tasks in Natural Language Processing (NLP); it aims to locate, extract, and classify named entities into predefined categories such as person, organization, and location. Most earlier research on identifying named entities relied on handcrafted features and very large knowledge resources, which is time-consuming and not adequate for resource-scarce languages such as Arabic. Recently, deep learning has achieved state-of-the-art performance on many NLP tasks, including NER, without requiring handcrafted features. In addition, transfer learning has also proven its efficiency in several NLP tasks by exploiting pretrained language models that transfer knowledge learned from large-scale datasets to domain-specific tasks. Bidirectional Encoder Representations from Transformers (BERT) is a contextual language model that generates semantic vectors dynamically according to the context of the words. The BERT architecture relies on multi-head attention, which allows it to capture global dependencies between words. In this paper, we propose a deep learning-based model that fine-tunes BERT to recognize and classify Arabic named entities. The pre-trained BERT context embeddings were used as input features to a Bidirectional Gated Recurrent Unit (BGRU) and were fine-tuned using two annotated Arabic Named Entity Recognition (ANER) datasets. Experimental results demonstrate that the proposed model outperformed state-of-the-art ANER models, achieving 92.28% and 90.68% F-measure values on the ANERCorp dataset and the merged ANERCorp and AQMAR dataset, respectively.
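The pipeline the abstract describes — contextual embeddings fed to a bidirectional GRU, followed by a per-token classification layer — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random weights, the 8-dimensional vectors standing in for BERT's 768-dimensional context embeddings, and the tag set are all hypothetical stand-ins for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell; weights are random stand-ins for trained parameters."""
    def __init__(self, input_dim, hidden_dim):
        s = 1.0 / np.sqrt(hidden_dim)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, (hidden_dim, input_dim)) for _ in range(3))
        self.Uz, self.Ur, self.Uh = (rng.uniform(-s, s, (hidden_dim, hidden_dim)) for _ in range(3))
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)              # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)              # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))  # candidate state
        return (1 - z) * h + z * h_tilde

def bgru(embeddings, fwd, bwd):
    """Run one GRU left-to-right and one right-to-left; concatenate the states."""
    T = len(embeddings)
    hf, hb = np.zeros(fwd.hidden_dim), np.zeros(bwd.hidden_dim)
    fstates, bstates = [], [None] * T
    for t in range(T):
        hf = fwd.step(embeddings[t], hf)
        fstates.append(hf)
    for t in reversed(range(T)):
        hb = bwd.step(embeddings[t], hb)
        bstates[t] = hb
    return [np.concatenate([f, b]) for f, b in zip(fstates, bstates)]

# Hypothetical sizes: real BERT-base embeddings are 768-d; 8-d here for brevity.
EMB, HID = 8, 4
TAGS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]  # illustrative tag set
W_out = rng.uniform(-0.5, 0.5, (len(TAGS), 2 * HID))  # token classification head

sentence = ["محمد", "يعيش", "في", "الرياض"]             # "Mohammed lives in Riyadh"
embeddings = [rng.normal(size=EMB) for _ in sentence]   # stand-in for BERT context vectors

states = bgru(embeddings, GRUCell(EMB, HID), GRUCell(EMB, HID))
predicted = [TAGS[int(np.argmax(W_out @ h))] for h in states]
print(list(zip(sentence, predicted)))
```

With untrained random weights the predicted tags are of course meaningless; the point is the data flow: each token's state concatenates left and right context (hence 2 × HID dimensions), and fine-tuning would update both the GRU weights and the underlying BERT encoder end-to-end on the annotated ANER data.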

Keywords

Named entity recognition; Arabic; deep learning; BGRU; BERT

Cite This Article

N. Alsaaran and M. Alrabiah, "Arabic named entity recognition: A BERT-BGRU approach," Computers, Materials & Continua, vol. 68, no. 1, pp. 471–485, 2021.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.