
Hill Climb Racing Game Controller

 


About   |   How To Use   |   Features   |   Technologies   |   Requirements   |   Starting   |   Made By   |   Author


🎯 About

This project is a hand-gesture controller for the game Hill Climb Racing. The game responds as you move your hand in front of the primary camera: machine learning is used to infer 21 three-dimensional landmarks of the hand via MediaPipe's state-of-the-art hand-tracking pipeline.
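MediaPipe's hand model returns 21 landmarks per hand, indexed 0 (wrist) through 20 (pinky tip). The sketch below shows one common way to turn those landmarks into a gesture signal, a tip-above-joint heuristic for counting raised fingers. This is an illustrative assumption, not necessarily the exact logic in the project's notebook:

```python
# Hedged sketch: MediaPipe gives 21 normalized (x, y) landmark points per
# hand. A common heuristic (an assumption here, not the project's exact
# logic) flags a finger as "up" when its tip sits above its PIP joint in
# image coordinates (smaller y = higher on screen).

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip indices
FINGER_PIPS = [6, 10, 14, 18]   # corresponding PIP joint indices

def fingers_up(landmarks):
    """Count extended fingers from 21 (x, y) normalized landmark points."""
    count = 0
    for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
        if landmarks[tip][1] < landmarks[pip][1]:  # tip above its joint
            count += 1
    return count
```

With an open palm the four non-thumb tips sit above their joints, so the count is 4; with a closed fist it is 0. The thumb is skipped because its "up" direction is horizontal rather than vertical and needs a separate check.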

🎯 How To Use

Basic interface of the window:

Basic Interface of Window

Playing the game:

Playing.The.Game.mp4

✨ Features

✔️ Uses Open Computer Vision (OpenCV)
✔️ Tracks the hand and fingertips efficiently
✔️ Moves the vehicle according to hand gestures
✔️ Operates the Hill Climb Racing game without touching the laptop

🚀 Technologies

The following tools were used in this project:

  • Python
  • OpenCV
  • MediaPipe
  • Jupyter Notebook

✅ Requirements

Before starting, you need to have Git and the basic deep-learning libraries installed. You should also install Hill Climb Racing by going to This Link.
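An optional way to confirm the core libraries are importable before launching the notebook (a sketch, not part of the repository; the package names listed are the usual import names, an assumption about the project's dependencies):

```python
# Check which required packages are missing, without importing them fully.
import importlib.util

def missing_packages(names=("cv2", "mediapipe", "pyautogui")):
    """Return the subset of package names that are not installed."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example: print anything that still needs `pip install`.
print(missing_packages())
```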

🏁 Starting

# Clone this project
$ git clone https://github.com/UtkarshPrajapati/Game-Controller-Using-Hand-Gestures.git

# Access
$ cd Game-Controller-Using-Hand-Gestures

# Install dependencies
$ pip install -r requirements.txt

# Run the project
$ jupyter nbconvert --to notebook --execute "Game Controller.ipynb"

📝 Made By

Made with ❤️ by Utkarsh Prajapati

 

