Specially abled people often struggle to communicate because most people do not know their sign language. This project addresses that problem by building a responsive system that aids specially abled (blind, deaf, and aphonic) people by interpreting sign language, which consists largely of hand gestures. Gesture recognition is a computer-vision technique used for human-computer interaction and automation. The recognized gestures are then converted to text and speech so that others can easily understand them. A minimal sketch of this end-to-end pipeline follows.
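As a rough illustration of that pipeline, the sketch below classifies a single webcam frame with a trained CNN and voices the result. The model file name (`gesture_model.h5`), the 64x64 input size, the A-Z label set, and the use of `pyttsx3` for text-to-speech are illustrative assumptions; the notebooks listed below define the actual models and preprocessing.

```python
# Minimal gesture -> text -> speech sketch. File name, input size, and labels
# are assumptions for illustration; see the repository's notebooks for the
# real training and preprocessing steps.
import string

import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

model = load_model("gesture_model.h5")   # hypothetical trained CNN (e.g. VGG16 / LeNet-5)
labels = list(string.ascii_uppercase)    # assumed A-Z gesture classes

def predict_letter(frame_bgr):
    """Classify one hand-gesture frame into a letter."""
    img = cv2.resize(frame_bgr, (64, 64)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    return labels[int(np.argmax(probs))]

def speak(text):
    """Voice the recognized text so non-signers can follow along."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    letter = predict_letter(frame)
    print("Predicted:", letter)
    speak(letter)
cap.release()
```

The notebooks below implement the individual stages of this pipeline: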
.ipynb file | Description |
---|---|
VGG16 | VGG16 model architecture |
VGG19 | VGG19 model architecture |
ResNet50 | ResNet50 model architecture |
TrainDataGenerator | Dataset splitting using Keras's ImageDataGenerator |
TrackBar | HSV trackbar for tuning skin-colour thresholds (see the sketch after this table) |
ObjectPathTracking | Tracks the centre of an object and draws the path it travels |
LeNet5 | LeNet-5 model architecture |
LabellingToData | Creating a pickle of the labelled data |
ImageAugmentation | Image augmentation |
ALPHABETLIVE | Final live A-Z prediction |
DatasetDT | Distance transform after skin extraction (see the sketch after this table) |
final_skin_optimized | Skin-pixel extraction following the algorithm from the referenced research paper |
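As referenced in the TrackBar and DatasetDT rows above, the sketch below shows one plausible version of the HSV-trackbar thresholding and distance transform steps. The window and trackbar names and the initial HSV bounds are assumptions for illustration, not values taken from the notebooks.

```python
# HSV-trackbar skin masking followed by a distance transform -- a sketch of
# the TrackBar and DatasetDT steps. Names and initial bounds are assumptions.
import cv2
import numpy as np

def nothing(_):
    pass  # trackbar callback placeholder

cv2.namedWindow("HSV TrackBar")
for name, init, maxval in [("H_low", 0, 179), ("S_low", 40, 255), ("V_low", 60, 255),
                           ("H_high", 25, 179), ("S_high", 255, 255), ("V_high", 255, 255)]:
    cv2.createTrackbar(name, "HSV TrackBar", init, maxval, nothing)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([cv2.getTrackbarPos(n, "HSV TrackBar") for n in ("H_low", "S_low", "V_low")])
    upper = np.array([cv2.getTrackbarPos(n, "HSV TrackBar") for n in ("H_high", "S_high", "V_high")])
    mask = cv2.inRange(hsv, lower, upper)                      # skin-coloured pixels
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)         # distance to nearest background pixel
    dist = cv2.normalize(dist, None, 0, 1.0, cv2.NORM_MINMAX)  # scale to [0, 1] for display
    cv2.imshow("mask", mask)
    cv2.imshow("distance transform", dist)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Tuning the six HSV bounds interactively makes it easier to isolate skin pixels under the current lighting; the distance transform then peaks far from the mask edges, near the palm centre, which can help localize the hand.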