A sign language interpreter using live video feed from the camera.
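A common starting point for a camera-based interpreter like this is an OpenCV capture loop that feeds each frame to a classifier. Below is a minimal sketch; `classify_sign` is a hypothetical stand-in for whatever model the project actually uses.

```python
import cv2

def classify_sign(frame):
    """Hypothetical stand-in for the project's gesture classifier."""
    return "?"  # replace with a real model's prediction

cap = cv2.VideoCapture(0)  # open the default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    label = classify_sign(frame)
    # overlay the predicted sign on the live feed
    cv2.putText(frame, label, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Sign Language Interpreter", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```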
The primary objective of this project is to build a real-time gesture recognition model. The model is proposed as a baseline for a sign language interpreter that automatically converts sign language into written output, making communication easier for people who are unable to speak.
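One plausible way to build such a baseline is to extract hand landmarks per frame and match them against stored templates. The sketch below uses MediaPipe Hands for landmark extraction; the nearest-neighbour matcher and the label set are assumptions, not the project's actual model.

```python
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def landmarks_from_frame(frame):
    """Return a flat (63,) array of x/y/z hand landmarks, or None if no hand is seen."""
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    points = result.multi_hand_landmarks[0].landmark  # 21 landmarks
    return np.array([[p.x, p.y, p.z] for p in points]).flatten()

# Hypothetical reference set: one stored landmark vector per sign.
reference = {"hello": np.zeros(63), "thanks": np.ones(63)}

def classify(frame):
    vec = landmarks_from_frame(frame)
    if vec is None:
        return None
    # nearest-neighbour match against the stored templates
    return min(reference, key=lambda sign: np.linalg.norm(reference[sign] - vec))
```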
Senior design project titled "Keyword Search for Sign Language" for the Special Project (EE 491-492) courses offered by Boğaziçi University.
This project focuses on sign language translation using wearable devices, which can help people with hearing and speech impairments in real-world scenarios 😄
An automated ASL interpreter. The project involves a robotic glove that detects hand movements in order to decipher ASL letters. The letters are displayed on an LCD.
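On the host side, a glove like this is typically read over a serial link: the microcontroller streams flex-sensor readings and the computer maps them to letters. A minimal sketch with pyserial follows; the port name, baud rate, message format, and threshold table are all assumptions.

```python
import serial

PORT = "/dev/ttyUSB0"  # assumed port; e.g. "COM3" on Windows
BAUD = 9600

# Hypothetical mapping from 5 flex sensors (bent=1, straight=0) to letters.
LETTERS = {
    (1, 1, 1, 1, 1): "A",  # all fingers curled
    (1, 0, 0, 0, 0): "B",  # thumb curled, fingers straight
}

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        # assumed format streamed by the glove: "512,130,840,220,610"
        raw = [int(v) for v in line.split(",")]
        bent = tuple(int(v > 400) for v in raw)  # assumed bend threshold
        print(LETTERS.get(bent, "?"))
```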
Sign2Sound is dedicated to revolutionizing communication for non-verbal individuals by seamlessly translating sign language gestures into understandable speech in real time. By bridging the gap between sign language users and those unfamiliar with it, Sign2Sound promotes inclusivity and accessibility, ultimately enriching the quality of life for all.
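The speech side of a system like Sign2Sound can be prototyped with an offline text-to-speech engine such as pyttsx3, speaking each recognized gesture as it arrives. This sketch covers only that last stage; the gesture labels are placeholders.

```python
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slow the voice slightly for clarity

def speak(gesture_label):
    """Speak one recognized gesture label aloud."""
    engine.say(gesture_label)
    engine.runAndWait()  # blocks until speech finishes

# Placeholder stream of recognized gestures.
for gesture in ["hello", "thank you", "goodbye"]:
    speak(gesture)
```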
A YOLOv5 model developed from scratch to convey signs to a blind person and generate text from the signs made by a mute person. It is a prototype to showcase the possibility of developing an interpreter for mute and blind people.
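Running a trained YOLOv5 sign detector over a camera frame typically looks like the sketch below, which loads custom weights through torch.hub; the `best.pt` weights file and the single-frame loop are assumptions about how this prototype is wired up.

```python
import cv2
import torch

# Load custom-trained weights (assumed file name) via the official YOLOv5 hub entry.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    results = model(frame)                 # run detection on one frame
    detections = results.pandas().xyxy[0]  # one row per detected sign
    # e.g. join detected sign names into text output, or feed them
    # to a text-to-speech engine for a blind user
    print(" ".join(detections["name"].tolist()))
```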