
# RoboND-Robotics-Inference

# Abstract

In this project, the AlexNet convolutional neural network was trained for 30 epochs to classify sign-language numbers and translate them into their correct written form. DIGITS, a platform made by NVIDIA for neural-network applications, simplifies the implementation of AlexNet as well as dataset handling, training, and model evaluation. A GPU workspace provided by Udacity supplied the processing power needed for training.
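The final step of the pipeline above, turning the classifier's raw outputs into a written number, can be sketched in plain Python. This is a hedged illustration, not the DIGITS/AlexNet code itself: the label list, the `softmax` helper, and the example logits are all assumptions made for the sketch, which only assumes a 10-class sign-language digit dataset.

```python
import math

# Hypothetical label set: sign-language digits 0-9 mapped to their written words.
LABELS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def softmax(logits):
    """Convert raw network outputs (logits) into class probabilities."""
    m = max(logits)                                # subtract max for numeric stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def translate(logits):
    """Pick the most probable class and return its written form and confidence."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# Example logits such as a trained classifier's final layer might emit;
# index 2 has the largest value, so the prediction is "two".
word, confidence = translate([0.1, 0.3, 4.2, 0.0, -1.0, 0.5, 0.2, 0.1, 0.0, 0.3])
print(word)  # prints "two"
```

In the real project this mapping would be applied to the 10-way output of the AlexNet model trained in DIGITS rather than to hand-written logits.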

# Introduction

Although people with hearing difficulties can understand sign language easily, it is still hard for others to understand them, because only a very small percentage of people learn this language. With recent advances in neural networks, sign language can be translated into its written form: a network only needs to be trained once, and a well-trained network can then run in real time with no noticeable delay. Much research has been done around this idea in the last ten years, and some applications have already come to light. This project is a prototype of a neural network that could be deployed on a Jetson TX2; together, the network and the Jetson would enable a robot that assists people with hearing difficulties in their daily life.