Open Access
ARTICLE
A Cooperative Hybrid Learning Framework for Automated Dandruff Severity Grading
1 Graduate Institute of Intelligent Manufacturing Technology, National Taiwan University of Science and Technology, Taipei, 106335, Taiwan
2 Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei, 106335, Taiwan
3 Department of Computer Science and Information Engineering, National Ilan University, Yilan, 26047, Taiwan
4 Office of Research and Industry-Academia Development, Chaoyang University of Technology, Taichung City, 413310, Taiwan
5 Department of Informatics, Universitas Atma Jaya Yogyakarta, Yogyakarta, 55281, Indonesia
* Corresponding Authors: Chih-Hsien Hsia. Email: ; Yung-Yao Chen. Email:
Computers, Materials & Continua 2026, 87(1), 95 https://doi.org/10.32604/cmc.2026.072633
Received 31 August 2025; Accepted 04 January 2026; Issue published 10 February 2026
Abstract
Automated grading of dandruff severity is a clinically significant but challenging task due to the inherent ordinal nature of severity levels and the high prevalence of label noise from subjective expert annotations. Standard classification methods fail to address these dual challenges, limiting their real-world performance. In this paper, a novel three-phase training framework is proposed that learns a robust ordinal classifier directly from noisy labels. The approach synergistically combines a rank-based ordinal regression backbone with a cooperative, semi-supervised learning strategy that dynamically partitions the data into clean and noisy subsets. A hybrid training objective is then employed: a supervised ordinal loss is applied to the clean set, while the noisy set is simultaneously trained with a dual objective that combines a semi-supervised ordinal loss with a parallel, label-agnostic contrastive loss. This design allows the model to learn from the entire noisy subset while using contrastive learning to mitigate the risk of error propagation from potentially corrupt supervision. Extensive experiments on a new, large-scale, multi-site clinical dataset validate the approach. The method achieves state-of-the-art performance with 80.71% accuracy and a 76.86% F1-score, significantly outperforming existing approaches, including a 2.26% improvement over the strongest baseline method. This work provides not only a robust solution for a practical medical imaging problem but also a generalizable framework for other tasks plagued by noisy ordinal labels.
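To make the hybrid objective described above concrete, the following is a minimal, hypothetical PyTorch sketch. It assumes a CORAL-style rank-based ordinal head producing K-1 binary threshold logits, pseudo-labels for the noisy split, an InfoNCE contrastive term over two augmented views, and illustrative loss weights lam_semi and lam_con; these names and values are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the hybrid training objective (not the paper's code).
import torch
import torch.nn.functional as F

def ordinal_targets(labels, num_classes):
    """Encode ordinal labels y in {0..K-1} as K-1 cumulative binary targets."""
    thresholds = torch.arange(num_classes - 1, device=labels.device)
    return (labels.unsqueeze(1) > thresholds).float()

def ordinal_loss(logits, labels, num_classes):
    """Rank-based ordinal regression loss: binary cross-entropy per threshold."""
    return F.binary_cross_entropy_with_logits(logits, ordinal_targets(labels, num_classes))

def info_nce(z1, z2, temperature=0.1):
    """Label-agnostic contrastive (InfoNCE) loss between two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def hybrid_loss(clean_logits, clean_labels,
                noisy_logits, noisy_pseudo_labels,
                noisy_proj_v1, noisy_proj_v2,
                num_classes, lam_semi=1.0, lam_con=0.5):
    """Supervised ordinal loss on the clean split, plus a dual objective on the
    noisy split: semi-supervised ordinal loss on pseudo-labels and a contrastive loss."""
    l_clean = ordinal_loss(clean_logits, clean_labels, num_classes)
    l_semi = ordinal_loss(noisy_logits, noisy_pseudo_labels, num_classes)
    l_con = info_nce(noisy_proj_v1, noisy_proj_v2)
    return l_clean + lam_semi * l_semi + lam_con * l_con
```

In this reading, the contrastive term supervises the noisy subset without relying on its labels, which is how the abstract's claim of mitigating error propagation from corrupt supervision would be realized.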
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

