Special Issues
Deep Learning for Emotion Recognition

Submission Deadline: 30 March 2026

Guest Editors

Dr. Thuseethan Selvarajah

Email: thuseethan.selvarajah@cdu.edu.au

Affiliation: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0810, Australia

Research Interests: deep learning, emotion recognition



Dr. Md Rafiqul Islam

Email: mdrafiqul.islam@cdu.edu.au

Affiliation: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0810, Australia

Research Interests: data visualization, machine learning, pattern mining, deep learning



Summary

The field of emotion recognition is experiencing a paradigm shift with the rise of deep learning. Neural architectures now enable greater accuracy, robustness, and real-world applicability, moving beyond traditional rule-based and shallow learning approaches. Deep learning solutions capture complex patterns across diverse modalities, including facial expressions, speech, physiological signals, and text, enabling multimodal systems that can better interpret human affective states.

Recent advances, such as transformer-based models, contrastive learning, and self-supervised representations, are making emotion recognition more scalable and generalizable. Multimodal deep learning, which integrates vision, audio, text, and physiological data, is proving especially effective at capturing nuanced emotional cues for applications in healthcare, education, entertainment, and human–computer interaction. Emerging trends such as lightweight architectures for real-time use, federated learning for privacy-preserving analysis, and explainable AI are further improving the practicality and trustworthiness of deep learning-driven emotion recognition. Nonetheless, challenges remain, including data imbalance, cultural variability, annotation subjectivity, and ethical concerns.

This Special Issue aims to showcase the transformative potential of deep learning for emotion recognition by presenting recent advancements, innovative frameworks, and applied case studies. Contributions addressing methodological challenges, cross-cultural generalization, and integration of multimodal data are highly encouraged. Topics of interest include, but are not limited to, the following:
· Deep learning architectures for emotion recognition;
· Multimodal emotion recognition from speech, facial expressions, text, and physiological signals;
· Transformer-based and self-supervised approaches for affective computing;
· Contrastive learning and representation learning for emotion analysis;
· Lightweight and efficient deep learning models for real-time emotion recognition;
· Federated learning and privacy-preserving emotion recognition systems;
· Explainable and interpretable deep learning models in emotion recognition;
· Cross-cultural and domain adaptation in emotion recognition;
· Ethical considerations and responsible deployment of emotion recognition systems.


Keywords

Multimodal Emotion Recognition, Deep Learning Architectures, Transformer Models, Self-Supervised Learning, Federated Learning, Explainable AI (XAI), Affective Computing