
Open Access Article

A Prosody-Guided Multi-Stream Framework for Universal Detection of AI-Synthesized Speech across Codec and Vocoder Domains

Akmalbek Abdusalomov1, Mukhriddin Mukhiddinov2,3, Fakhriddin Abdirazakov4, Alpamis Kutlimuratov5, Nodira Alimova6, Ilyos Kalandarov7, Ayhan Istanbullu8, Rashid Nasimov9, Young-Im Cho1,*
1 Department of Computer Engineering, Gachon University, Seongnam-si, Republic of Korea
2 Department of Industrial Management and Digital Technologies, Nordic International University, Tashkent, Uzbekistan
3 Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent, Uzbekistan
4 Department of Computer Systems, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent, Uzbekistan
5 Department of Applied Informatics, Kimyo International University in Tashkent, Tashkent, Uzbekistan
6 Department of Information Processing and Control Systems, Tashkent State Technical University, Tashkent, Uzbekistan
7 Department of Automation and Control, Navoi State University of Mining and Technologies, Navoi, Uzbekistan
8 Department of Computer Engineering, Faculty of Engineering, Balikesir University, Balikesir, Turkey
9 Department of Artificial Intelligence, Tashkent State University of Economics, Tashkent, Uzbekistan
* Corresponding Author: Young-Im Cho. Email: email

Computers, Materials & Continua https://doi.org/10.32604/cmc.2026.080444

Received 09 February 2026; Accepted 15 April 2026; Published online 27 April 2026

Abstract

Recent advancements in AI-synthesized speech have produced highly realistic deepfake audio, posing severe threats to authentication systems and digital media trust. Existing detection models struggle to generalize across diverse synthesis methods, especially those involving neural codec-based Audio Language Models (ALMs). In this work, we propose UniTector++, a novel prosody-aware, multi-stream detection architecture that generalizes across vocoder- and codec-based synthesis. UniTector++ incorporates three complementary streams—Whisper-based semantic embeddings, high-level prosodic features, and codec artifact representations—fused through a Multi-Domain Adaptive Graph Attention Fusion (MAGAF) module. Furthermore, an Emotion-Consistency Verification Module (ECVM) reinforces alignment between speech style and prosodic content, and a Universal Adversarial Robustness (UAR) head improves resistance to adversarial attacks. Evaluated on three benchmark datasets—ASVspoof2021, PolyFake, and Codecfake—UniTector++ achieves state-of-the-art performance with an average Equal Error Rate (EER) of 0.57% under unseen synthesis scenarios, outperforming competitive baselines by a relative margin of 28%. Our results demonstrate the model's superior generalization, interpretability, and robustness, offering a significant advancement in universal deepfake speech detection.
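The abstract does not specify the internals of the MAGAF fusion module, but the general idea of combining multiple feature streams with learned attention weights can be illustrated with a minimal, self-contained sketch. The function names, the single query vector, and the toy embeddings below are all hypothetical simplifications for illustration, not the paper's actual architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(streams, query):
    """Fuse equal-dimension stream embeddings by scaled dot-product attention.

    streams: list of embedding vectors, e.g. [semantic, prosody, codec]
    query:   a (here fixed, normally learned) query vector of the same dimension
    Returns the attention-weighted sum of the streams and the per-stream weights.
    """
    dim = len(query)
    scores = [sum(q * s for q, s in zip(query, st)) / math.sqrt(dim)
              for st in streams]
    weights = softmax(scores)
    fused = [sum(w * st[i] for w, st in zip(weights, streams))
             for i in range(dim)]
    return fused, weights

# Toy 4-dimensional embeddings for the three streams named in the abstract.
semantic = [1.0, 0.0, 0.0, 0.0]
prosody  = [0.0, 1.0, 0.0, 0.0]
codec    = [0.0, 0.0, 1.0, 0.0]
fused, weights = attention_fuse([semantic, prosody, codec], query=[1.0, 0.0, 0.0, 0.0])
```

With this query, the semantic stream receives the largest weight, showing how the fusion can adaptively emphasize whichever stream is most informative for a given input; the actual MAGAF module additionally models cross-domain relationships with graph attention.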

Keywords

Deepfake speech detection; prosody analysis; neural codec artifacts; Whisper model; multi-stream fusion; emotion-consistency verification; AI-synthesized speech; spoofing detection