Date of Award

6-9-2025

Document Type

Thesis

Publisher

Santa Clara : Santa Clara University, 2025

Departments

Computer Science and Engineering; Electrical and Computer Engineering

First Advisor

Ahmed Amer

Second Advisor

Maria Kyrarini

Abstract

In this thesis, we propose an augmentative and alternative communication (AAC) framework for silent speech. Many individuals with speech impairments are unable to vocalize effectively due to conditions that affect the vocal cords. To engage in social activities, many rely on AAC devices that often lack flexibility and expressiveness, making self-expression and spontaneity difficult. This project presents a novel approach to developing a silent speech interface (SSI), providing a more adaptable and user-centered way to give the vocally impaired a voice. Using surface electromyography (sEMG) alongside machine learning techniques, we aim to map neuromuscular signals produced by sub-vocalizations to audible phonemes, the smallest units of speech. Sub-vocalization, often described as inner speech, is the process of silently pronouncing words in one's mind, as when reading. Unlike most AAC devices, which are constrained to a set of predetermined and commonly used words, our approach can provide an unrestricted vocabulary through the compositional construction of phonemes. Our framework uses non-invasive sEMG sensors placed over the speech articulator muscle groups so that our model can learn associations between subtle myoelectric patterns and the phonemes being produced. We conducted a user study to collect synchronized EMG and audio data, and with the data from these participants we trained our model to classify phonetic symbols from windows of EMG data.
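To make the pipeline described above concrete, the following is a minimal, illustrative sketch of classifying phoneme labels from fixed-length windows of multichannel sEMG. The channel count, sampling rate, window length, phoneme set, hand-crafted features, and classifier choice are all assumptions for the sake of the example and are not taken from the thesis; the synthetic data stands in for the study's recordings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Hypothetical recording parameters (not specified in the abstract).
N_CHANNELS = 8        # sEMG electrodes over speech articulator muscles
SAMPLE_RATE = 1000    # Hz
WINDOW_MS = 200       # window length for each phoneme-labelled frame
WINDOW = SAMPLE_RATE * WINDOW_MS // 1000
PHONEMES = ["AA", "IY", "M", "S", "T"]  # placeholder phoneme set

def extract_features(window):
    """Simple per-channel time-domain features often used for sEMG:
    mean absolute value, root mean square, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, rms, zc])

# Synthetic stand-in for windowed EMG data and phoneme labels.
rng = np.random.default_rng(0)
n_windows = 500
X_raw = rng.normal(size=(n_windows, WINDOW, N_CHANNELS))
y = rng.integers(0, len(PHONEMES), size=n_windows)

# Turn each raw window into a feature vector, then train a small classifier.
X = np.stack([extract_features(w) for w in X_raw])
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real synchronized EMG and audio recordings, the labels would come from a phonetic alignment of the audio track rather than random integers, and the feature extraction or classifier could be replaced by a learned front end; this sketch only shows the window-to-phoneme classification structure.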
