Open Access
ARTICLE
Leveraging Unlabeled Corpus for Arabic Dialect Identification
1 School of Computer Science, Northwestern Polytechnical University, Xi’an, 710072, China
2 Department of Computer Science, Applied College, King Khalid University, Muhayil, 63311, Saudi Arabia
3 Computer Science Department, National University of Modern Languages, Faisalabad, 38000, Pakistan
* Corresponding Author: Mohammed Abdelmajeed. Email:
Computers, Materials & Continua 2025, 83(2), 3471-3491. https://doi.org/10.32604/cmc.2025.059870
Received 18 October 2024; Accepted 21 January 2025; Issue published 16 April 2025
Abstract
Arabic Dialect Identification (DID) is a task in Natural Language Processing (NLP) that involves determining the dialect of a given piece of Arabic text. State-of-the-art solutions for DID are built on deep neural networks that learn sentence representations conditioned on a given dialect. Despite their effectiveness, these solutions depend heavily on the number of labeled examples, which are labor-intensive to obtain and may not be readily available in real-world scenarios. To alleviate the burden of labeling data, this paper introduces a novel solution that leverages unlabeled corpora to boost performance on the DID task. Specifically, we design an architecture that learns the information shared between labeled and unlabeled texts through a gradient reversal layer. The key idea is to penalize the model for learning source dataset-specific features, so that it captures common knowledge regardless of labels. Finally, we evaluate the proposed solution on benchmark DID datasets. Our extensive experiments show that it performs significantly better, especially with sparse labeled data. Compared with existing Pre-trained Language Models (PLMs), our approach achieves a new state-of-the-art performance in the DID field. The code will be available on GitHub upon the paper's acceptance.
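
The abstract does not show how the gradient reversal layer works; below is a minimal PyTorch-style sketch of the standard way such a layer is implemented. The class name GradReverse and the scaling factor lambd are illustrative assumptions, not names from the paper, whose code is not yet released.

    import torch

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips (and scales) the gradient
        in the backward pass."""

        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # The reversed gradient discourages the shared encoder from
            # learning features that separate the source datasets.
            return grad_output.neg() * ctx.lambd, None

    def grad_reverse(x, lambd=1.0):
        """Insert between a shared encoder and a dataset discriminator."""
        return GradReverse.apply(x, lambd)

In a typical setup of this kind, a small discriminator placed after grad_reverse is trained to tell which corpus (labeled or unlabeled) a sentence came from; because its gradient reaches the encoder reversed, the encoder is pushed toward representations shared across both corpora.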
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.