Melissa edited this page Apr 11, 2019 · 4 revisions

Welcome to the capstone wiki!

In this capstone project, we will build an accessibility app that gives spoken instructions and notifications to people with visual impairments while they walk. The app reports possible obstacles by analyzing pictures and video of the environment ahead, captured from a camera source such as a wearable device or the phone's camera.

Background / Motivation

Our first idea for the capstone was to create an iOS app that enhances quality of life by acting as an everyday assistant, because an iOS app is highly accessible to the public: anybody can download it. New iOS lifestyle apps are being developed and commercialized rapidly, but there is a major gap in apps that help people with disabilities live better lives. Filling that gap could be a strong academic topic for our capstone project and a business opportunity as well. With today's technology, there is much more we can do to help visually or hearing-impaired people see, hear, and communicate more clearly, and to help physically challenged people walk or use their hands as if they had full capability.

To narrow the scope of our project, we decided to focus on vision. The idea of recognizing potential obstacles on the street for visually impaired people was initially inspired by experiences in my own life. A year ago, my grandma tripped on the street because she didn't realize there was a curb on the sidewalk. As a result, she spent almost two months recovering in the hospital. Another story that inspired us is a colleague of mine with a partial visual disability who has to be guided by another colleague or a service dog every time I see him in the hallway. We therefore feel we can use our knowledge to build an app that helps people with visual challenges live a better life.

Initial High Level Design

Hardware:

  • Take video input
    • OpenEyeTap
    • Phone's camera
  • How should we handle the input?
  • How is the device connected to our server?
  • Output to iPhone?
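One way to answer the "take video input / handle the input" questions for the phone-camera path can be sketched with AVFoundation. This is a minimal sketch, not a committed design; the `FrameCapture` class name and the `onFrame` callback are our own placeholders:

```swift
import AVFoundation

// Sketch: capture video frames from the phone's back camera so each
// frame can be forwarded to the obstacle-recognition stage.
final class FrameCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    var onFrame: ((CMSampleBuffer) -> Void)?   // placeholder hook for the ML stage

    func start() {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called by AVFoundation for every captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        onFrame?(sampleBuffer)
    }
}
```

If we instead take input from the OpenEyeTap or another wearable, only this capture layer would change; the rest of the pipeline could consume frames through the same callback.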

Software:

  • Language

    • Swift
    • C#
  • API

  • iOS app

  • Machine Learning

    • How to recognize the object
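For the "how to recognize the object" question, one candidate approach is Apple's Vision framework running a Core ML object-detection model. This is a sketch under the assumption that we bundle a detection-style `.mlmodel` (e.g. a MobileNet/YOLO-style network); the function name and callback shape are ours, not a fixed API of the project:

```swift
import Vision

// Sketch: run a Core ML object-detection model on one camera frame.
// `model` is assumed to wrap a bundled .mlmodel we have yet to choose/train.
func detectObstacles(in pixelBuffer: CVPixelBuffer,
                     model: VNCoreMLModel,
                     completion: @escaping ([VNRecognizedObjectObservation]) -> Void) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Each observation carries a bounding box plus ranked class labels.
        let results = request.results as? [VNRecognizedObjectObservation] ?? []
        completion(results)
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try? handler.perform([request])
}
```

Using Vision would let us prototype in Swift end to end; training a custom model (e.g. for curbs and street obstacles) would still happen offline before conversion to Core ML.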

Work Plan

  • Create the iOS app
  • Recognize bounding boxes
  • Add layers to determine the object class
  • Train the model
  • Output results back to the app as voice
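The final step of the plan, speaking results back to the user, can be sketched with AVSpeechSynthesizer. The phrasing and speech rate here are assumptions to be tuned with user testing:

```swift
import AVFoundation

// Keep one synthesizer alive; a locally scoped one can be deallocated mid-speech.
let synthesizer = AVSpeechSynthesizer()

// Sketch: announce a detected obstacle label (e.g. "curb") to the user.
func announce(obstacle label: String) {
    let utterance = AVSpeechUtterance(string: "Caution: \(label) ahead")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}
```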