All of the system's features are operated through hand gestures. A deep-learning model tracks the hand and fingers, and the tracked fingertip positions are then used to generate click signs and to access the other functionalities of the system.
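As a minimal sketch of this idea, the snippet below detects a pinch between the thumb tip and index fingertip and treats it as a click sign. The report does not name a specific tracking library, so MediaPipe Hands (a widely used deep-learning hand tracker) and the pinch-distance threshold are assumptions made only for illustration.

```python
# Illustrative sketch only: MediaPipe Hands is assumed here as the
# deep-learning hand tracker; the report does not specify the library.
import math
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands


def fingertip_distance(lm_a, lm_b):
    """Euclidean distance between two normalized hand landmarks."""
    return math.hypot(lm_a.x - lm_b.x, lm_a.y - lm_b.y)


def run_gesture_loop(click_threshold=0.05):
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV delivers BGR frames.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                # Landmark 4 = thumb tip, landmark 8 = index fingertip.
                # Tips close together (a pinch) is treated as a "click" sign.
                if fingertip_distance(lm[4], lm[8]) < click_threshold:
                    print("click gesture detected")
            cv2.imshow("hand tracking", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    run_gesture_loop()
```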
Once the hands are tracked, a series of functionalities becomes available. Some of them are:
- Loading images (such as diagrams, use cases, and workflows)
- 3-Dimensional graphics to visualize advanced curves that are difficult to display on 2-D surfaces (a sketch follows this list).
- Result analysis - most video conferencing platforms do not provide this; the system analyses how each student performed and displays the results graphically with charts (see the chart sketch after this list).
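A minimal sketch of the 3-D graphics idea is shown below. The helix is a placeholder curve chosen purely for illustration; the report does not specify which curves or which plotting library the system uses, so matplotlib's 3-D axes are assumed here.

```python
# Sketch of rendering a space curve that a flat 2-D plot cannot convey.
# The helix and matplotlib are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 4 * np.pi, 400)
x, y, z = np.cos(t), np.sin(t), t / (2 * np.pi)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # requires matplotlib >= 3.2
ax.plot(x, y, z)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.set_title("Example 3-D curve")
plt.show()
```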
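For the result-analysis view, the following hedged sketch charts per-student performance. The student names, scores, and the choice of a bar chart are placeholder assumptions; the report does not specify the chart type or the data source.

```python
# Placeholder data and chart type, used only to illustrate how per-student
# results could be displayed graphically.
import matplotlib.pyplot as plt

scores = {"Student A": 82, "Student B": 67, "Student C": 91, "Student D": 74}

plt.bar(scores.keys(), scores.values())
plt.ylabel("Score (%)")
plt.title("Per-student result analysis")
plt.show()
```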
Mobile View: