Open Access


A Real-Time Oral Cavity Gesture Based Words Synthesizer Using Sensors

Palli Padmini1, C. Paramasivam1, G. Jyothish Lal2, Sadeen Alharbi3,*, Kaustav Bhowmick4

1 Department of Electronics & Communication Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India
2 Center for Computational Engineering and Networking (CEN), Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India
3 Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
4 Department of Electronics and Communication Engineering, PES University, Bengaluru, India

* Corresponding Author: Sadeen Alharbi. Email:

Computers, Materials & Continua 2022, 71(3), 4523-4554.


The present system experimentally demonstrates the synthesis of syllables and words in multiple languages from tongue manoeuvres captured by only four oral sensors. A prototype tooth model was used for the experimental demonstration of the system in the oral cavity. Based on the principle developed in a previous publication by the author(s), the proposed system has been implemented using oral-cavity features (tongue, teeth, and lips) alone, without the glottis and the larynx. The sensor positions in the proposed system were optimized based on articulatory (oral-cavity) gestures estimated by simulating the mechanism of human speech. The system has been tested on all the letters of the English alphabet and on several words with sensor-based input, along with an experimental demonstration of the developed algorithm in which limit switches, a potentiometer, and flex sensors emulate the tongue in an artificial oral cavity. The system produces the sounds of English vowels, consonants, and words, along with the pronunciation of the meanings of their translations in four major Indian languages, all from oral-cavity mapping. The experimental setup also caters to gender mapping of the voice. The sound produced by the hardware has been validated by a perceptual test in which listeners verified the gender and the word of each speech sample, with ∼98% and ∼95% accuracy, respectively. Such a model may be useful for interpreting speech for people with speech disabilities caused by accidents, neurological disorders, spinal cord injury, or larynx disorders.
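The core idea of the abstract, mapping a discrete oral-cavity sensor state to a speech sound, can be sketched as a simple lookup. This is an illustrative sketch only: the sensor names, thresholds, and phoneme labels below are assumptions for the example, not the mapping used in the article.

```python
# Illustrative sketch, not the paper's method: a hypothetical mapping from an
# oral-cavity sensor frame (limit-switch contacts plus a flex-sensor angle
# approximating tongue height) to a phoneme label.

def classify_gesture(front_contact: bool, palate_contact: bool, flex_angle: float) -> str:
    """Map a simplified sensor state to a phoneme label (assumed rules)."""
    if front_contact and palate_contact:
        return "t"   # tongue tip pressed against teeth/palate
    if palate_contact:
        return "k"   # tongue body raised toward the palate
    if flex_angle > 60:
        return "i"   # high tongue position -> close vowel
    if flex_angle > 30:
        return "e"   # mid tongue position -> mid vowel
    return "a"       # low tongue position -> open vowel

# A raised tongue with no contacts is read as a close vowel.
print(classify_gesture(False, False, 75))  # prints "i"
```

A real system would feed such labels to a speech synthesizer and smooth the sensor stream over time; the thresholds here are placeholders chosen only to make the branching logic concrete.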


Cite This Article

P. Padmini, C. Paramasivam, G. Jyothish Lal, S. Alharbi and K. Bhowmick, "A real-time oral cavity gesture based words synthesizer using sensors," Computers, Materials & Continua, vol. 71, no.3, pp. 4523–4554, 2022.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.