Open Access

ARTICLE


An Efficient Text-Independent Speaker Identification Using Feature Fusion and Transformer Model

Arfat Ahmad Khan1, Rashid Jahangir2,*, Roobaea Alroobaea3, Saleh Yahya Alyahyan4, Ahmed H. Almulhi3, Majed Alsafyani3, Chitapong Wechtaisong5

1 College of Computing, Khon Kaen University, Khon Kaen, 40000, Thailand
2 Department of Computer Science, COMSATS University Islamabad, Vehari Campus, Vehari, 61100, Pakistan
3 Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif, 21944, Saudi Arabia
4 Department of Computer Science, Community College in Dwadmi, Sharqa University, Dawadmi, 17472, Saudi Arabia
5 School of Telecommunication Engineering, Suranaree University of Technology, Nakhon Ratchasima, 30000, Thailand

* Corresponding Author: Rashid Jahangir. Email: email

Computers, Materials & Continua 2023, 75(2), 4085-4100. https://doi.org/10.32604/cmc.2023.036797

Abstract

Automatic Speaker Identification (ASI) is the task of determining which speaker produced a given utterance in an audio stream containing numerous speakers’ utterances. Common factors, such as differences in recording frameworks, overlapping sound events, and the presence of multiple sound sources during recording, make the ASI task considerably more complex. This research proposes a deep learning model to improve the accuracy of the ASI system and reduce model training time under limited computational resources. Specifically, the performance of the transformer model is investigated. Seven audio features, chromagram, Mel-spectrogram, tonnetz, Mel-Frequency Cepstral Coefficients (MFCCs), delta MFCCs, delta-delta MFCCs, and spectral contrast, are extracted from the ELSDSR, CSTR-VCTK, and Ar-DAD datasets. The evaluation of various experiments demonstrates that the best performance was achieved by the proposed transformer model using all seven audio features on all datasets. The highest attained accuracies are 0.99, 0.97, and 0.99 for ELSDSR, CSTR-VCTK, and Ar-DAD, respectively. The experimental results indicate that the proposed technique achieves the best performance on the ASI problem.

Cite This Article

APA Style
Khan, A.A., Jahangir, R., Alroobaea, R., Alyahyan, S.Y., Almulhi, A.H. et al. (2023). An efficient text-independent speaker identification using feature fusion and transformer model. Computers, Materials & Continua, 75(2), 4085-4100. https://doi.org/10.32604/cmc.2023.036797
Vancouver Style
Khan AA, Jahangir R, Alroobaea R, Alyahyan SY, Almulhi AH, Alsafyani M, et al. An efficient text-independent speaker identification using feature fusion and transformer model. Comput Mater Contin. 2023;75(2):4085-4100. https://doi.org/10.32604/cmc.2023.036797
IEEE Style
A.A. Khan et al., "An Efficient Text-Independent Speaker Identification Using Feature Fusion and Transformer Model," Comput. Mater. Contin., vol. 75, no. 2, pp. 4085-4100, 2023. https://doi.org/10.32604/cmc.2023.036797



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.