Date of Award

6-16-2017

Document Type

Thesis

Publisher

Santa Clara: Santa Clara University, 2017.

Departments

Electrical Engineering; Computer Engineering

First Advisor

Yi Fang

Second Advisor

Sally Wood

Abstract

Most products that assist people with visual disabilities in interpreting text focus on the direct translation or dictation of the text in front of the user; they seldom attempt any textual understanding beyond literal translation. In this project, we developed a novel wearable system that gives the visually impaired a better understanding of the textual world around them. Using the equivalent of a typical smartphone camera, a device captures a feed of the user’s surroundings. Pre-processing algorithms then adjust white balance and exposure and detect blurriness to optimize each capture. The resulting images are sent to the user’s smartphone, where they are translated into text. Finally, the text is read aloud by an app that the user controls through touch and haptic feedback. The back-end of the system continuously learns from these captures over time to provide more meaningful and natural feedback in response to a user’s semantic queries about the text before them. This document includes the requirements, design, use cases, risk tables, workflow, and architecture for the device we developed.
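The abstract names pre-processing steps (white-balance and exposure adjustment, blur detection) without specifying the algorithms used. The sketch below illustrates two common techniques that fit that description, variance-of-the-Laplacian blur detection and gray-world white balance, using OpenCV; the threshold value and function names are illustrative assumptions, not details taken from the thesis.

```python
import cv2
import numpy as np

# Hypothetical cutoff; in practice this would be tuned for the device's camera.
BLUR_THRESHOLD = 100.0


def is_blurry(image_bgr: np.ndarray, threshold: float = BLUR_THRESHOLD) -> bool:
    """Flag a frame as blurry via the variance of the Laplacian.

    Low variance means few sharp edges, a common proxy for focus or motion blur.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold


def gray_world_white_balance(image_bgr: np.ndarray) -> np.ndarray:
    """Rescale each color channel so its mean matches the global mean.

    This is the gray-world assumption: on average, a scene is achromatic.
    """
    img = image_bgr.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img *= channel_means.mean() / channel_means
    return np.clip(img, 0, 255).astype(np.uint8)


def preprocess(frame: np.ndarray):
    """Return a corrected frame, or None if it should be recaptured."""
    if is_blurry(frame):
        return None  # discard and wait for a sharper capture
    return gray_world_white_balance(frame)
```

In a capture loop, frames returning None would simply be skipped until a sharp one arrives, which matches the abstract's goal of optimizing captures before sending them to the smartphone for text recognition.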
