Open Access
ARTICLE
VMFD: Virtual Meetings Fatigue Detector Using Eye Polygon Area and Dlib Shape Indicator
1 Key Laboratory for Ubiquitous Network and Service Software, School of Software, Dalian University of Technology, Dalian, 116024, China
2 College of Engineering, Alfaisal University, Riyadh, 11533, Saudi Arabia
3 Institute of Human-Centred Computing (HCC), Graz University of Technology, Inffeldgasse 16c, Graz, 8010, Austria
4 School of Computing, Gachon University, Seongnam-si, 13120, Republic of Korea
5 Faculty of Engineering, Université de Moncton, Moncton, NB E1A3E9, Canada
6 School of Electrical Engineering, University of Johannesburg, Johannesburg, 2006, South Africa
7 Research Unit, International Institute of Technology and Management (IITG), Av. Grandes Ecoles, Libreville, BP 1989, Gabon
* Corresponding Authors: Sghaier Guizani. Email: ; Ateeq Ur Rehman. Email:
Computers, Materials & Continua 2026, 86(3), 25. https://doi.org/10.32604/cmc.2025.071254
Received 03 August 2025; Accepted 06 October 2025; Issue published 12 January 2026
Abstract
Numerous sectors, such as education, the IT industry, and corporate organizations, transitioned to virtual meetings after the COVID-19 crisis. Organizations now seek to assess participants’ fatigue levels in online meetings to remain competitive, yet instructors cannot effectively monitor every individual in a virtual environment, which raises significant concerns about participant fatigue. Our proposed system monitors fatigue, identifying attentive and drowsy individuals throughout an online session. We leverage Dlib’s pre-trained facial landmark detector and focus on the eye landmarks only, offering a more detailed analysis for predicting eye opening and closing than an approach that considers the entire face. We introduce an Eye Polygon Area (EPA) formula, which computes eye activity from Dlib eye landmarks by measuring the polygonal area of the eye opening. Unlike the Eye Aspect Ratio (EAR), which relies on a single distance ratio, EPA adapts to different eye shapes (round, narrow, or wide), providing a more reliable measure for fatigue detection. The VMFD system issues a warning if a participant remains in a fatigued state for 36 consecutive frames. The proposed technology is tested under multiple scenarios, including low- to high-lighting conditions (50–1400 lux) and both with and without glasses. The system is implemented as an OpenCV application in Python and evaluated on the iBUG 300-W dataset, achieving 97.5% accuracy in detecting active participants. We compare VMFD with conventional methods relying on the EAR and show that the EPA technique performs significantly better.
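The sketch below illustrates the EPA idea described in the abstract: the polygonal area enclosed by Dlib's six eye landmarks (points 36–41 for the left eye and 42–47 for the right eye in the 68-point model) is computed with the shoelace formula and compared against a closed-eye threshold, alongside the conventional EAR for reference. It is a minimal illustration, not the authors' implementation; the threshold value and function names are assumptions for demonstration only.

```python
# Minimal sketch of the Eye Polygon Area (EPA) idea from the abstract.
# Not the authors' implementation; epa_threshold is an illustrative assumption.
# Assumes Dlib's 68-point model: left eye = landmarks 36-41, right eye = 42-47.
import numpy as np

def polygon_area(points):
    """Shoelace formula for the area of a polygon given an (N, 2) array of vertices."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def eye_aspect_ratio(eye):
    """Conventional EAR: ratio of vertical to horizontal landmark distances."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def is_closed_eye_frame(landmarks, epa_threshold=50.0):
    """Flag a frame as closed-eye when the mean eye polygon area falls below
    an (assumed) threshold; the paper raises a fatigue warning once such
    frames persist for 36 consecutive frames."""
    left_eye = landmarks[36:42]    # (6, 2) left-eye landmark coordinates
    right_eye = landmarks[42:48]   # (6, 2) right-eye landmark coordinates
    epa = (polygon_area(left_eye) + polygon_area(right_eye)) / 2.0
    return epa < epa_threshold
```

In a per-frame loop, a counter incremented on each closed-eye frame (and reset otherwise) would trigger the warning once it reaches 36, matching the frame count stated in the abstract.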
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.