AI_Visual_Stream

All of the system's features are controlled by hand gestures. A deep-learning model tracks the hand and fingers, and the tracked fingertips are then used to generate click gestures and to access the system's other features.
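The README does not say which tracking model or library the project uses, so the following is only a minimal illustrative sketch, assuming MediaPipe Hands and OpenCV. It tracks one hand from the webcam and treats a thumb-index pinch as a "click" sign; the 0.05 pinch threshold and the single-hand limit are arbitrary choices for the sketch, not values from the project.

```python
# Minimal sketch of finger tracking with a pinch-to-click gesture.
# MediaPipe Hands is assumed here for illustration; the repository does
# not state which model it actually uses.
import math

import cv2                # webcam capture
import mediapipe as mp    # hand landmark detection

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        thumb_tip, index_tip = lm[4], lm[8]   # MediaPipe landmark indices

        # Treat a thumb/index pinch as a "click" sign (threshold is arbitrary).
        if math.hypot(thumb_tip.x - index_tip.x, thumb_tip.y - index_tip.y) < 0.05:
            print("click")

    cv2.imshow("AI_Visual_Stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```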

Once the hands are tracked, a series of features becomes available, including:

  1. Loading images (such as diagrams, use cases, and workflows)
  2. 3-dimensional graphics to visualize advanced curves that are hard to display on 2-D surfaces
  3. Result analysis - most video-conferencing platforms do not provide this; the system analyses how each student performed and displays the results graphically with charts (a hypothetical charting sketch follows this list)
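As a rough illustration of the result-analysis feature, the sketch below plots per-student scores as a bar chart with Matplotlib. The student names, scores, and chart style are assumptions made up for the example, not data or code from the project.

```python
# Hypothetical sketch of the "result analysis" charts; the names and
# scores below are invented for illustration only.
import matplotlib.pyplot as plt

scores = {"Student A": 78, "Student B": 92, "Student C": 65}  # assumed data

plt.bar(list(scores.keys()), list(scores.values()), color="steelblue")
plt.ylabel("Score (%)")
plt.title("Per-student result analysis")
plt.show()
```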


Mobile View: [screenshot]
