A deep learning framework to assess squat depth from videos.

rishic3/DepthCheck

DepthPerception

The Framework:

DepthPerception is a deep learning framework to assess squat depth in videos. It aims to automate the judging process in competitive powerlifting settings, and encourage proper depth and posture for non-competitive lifters.

The framework takes a squat video as input. It applies YOLOv3 to locate the lifter in the video, then uses MediaPipe BlazePose to estimate the lifter's pose across frames. From the estimated 3D coordinates it determines the knee and hip planes and computes a depth classification. A detailed walkthrough of the code can be found here.
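At its core, the depth check reduces to comparing the hip and knee heights extracted from the pose. A minimal sketch of that comparison, assuming BlazePose-style normalized image coordinates (the function name and `tolerance` parameter are illustrative, not the framework's actual API):

```python
def classify_depth(hip_y, knee_y, tolerance=0.0):
    """Classify squat depth from normalized landmark y-coordinates.

    In image coordinates y=0 is the top of the frame, so a larger y
    means a lower point. A squat reaches legal depth when the hip
    crease drops to (or below) the top of the knee.
    """
    displacement = hip_y - knee_y  # positive => hip below knee
    if displacement >= -tolerance:
        return "good depth", displacement
    return "above depth", displacement

# Deepest frame of a hypothetical lift: hip (y=0.62) below knee (y=0.60).
label, disp = classify_depth(hip_y=0.62, knee_y=0.60)
```

The signed displacement is what lets the framework report not just a pass/fail verdict but *how far* from depth the lifter was.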

Install Requirements

Install the models and other dependencies with pip:

pip install -r requirements.txt

Run the Demo

A demo video, squatExample.mov, can be found in the repository's root directory.

sideAngle.py and main.py can be run on this file like so:

python3 sideAngle.py squatExample.mov

or

python3 main.py squatExample.mov

The user will be prompted to optionally enter the subject's height in centimeters.
Enter 180 (the height of the subject in the example video) to have the model report depth discrepancies in real-world units, or type none to skip.
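The optional height makes the normalized pose coordinates interpretable in centimeters: the subject's known height fixes the scale of the frame. A rough sketch of that conversion (both helper names and the `norm_body_height` argument are hypothetical; the real scripts may measure scale differently):

```python
def prompt_height(raw):
    """Parse the height prompt: a number in centimeters, or 'none'."""
    raw = raw.strip().lower()
    return None if raw == "none" else float(raw)

def displacement_to_cm(norm_displacement, subject_height_cm, norm_body_height):
    """Scale a normalized-coordinate displacement to centimeters.

    norm_body_height is the subject's full height measured in the same
    normalized units (e.g. head-to-ankle distance in the frame), which
    gives the cm-per-unit conversion factor.
    """
    cm_per_unit = subject_height_cm / norm_body_height
    return norm_displacement * cm_per_unit
```

For example, a normalized displacement of 0.02 for a 180 cm subject who spans 0.9 normalized units in frame corresponds to roughly 4 cm.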

Output

Both frameworks print a depth classification, along with the displacement from legal depth, to the console.

Both frameworks also produce two plots and an output video.
The first plot displays the results of the YOLOv3 detection, like so:



The second plot will display the frame containing the deepest instance of the squat, like so:
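Selecting the deepest instance of the squat amounts to scanning the per-frame hip coordinates for their lowest point. A one-function sketch of that selection (the helper name is illustrative, assuming image coordinates where y grows downward):

```python
def deepest_frame(hip_ys):
    """Return the index of the frame where the hip is lowest.

    Since image y grows downward, the deepest point of the squat is
    simply the frame with the maximum hip y-coordinate.
    """
    return max(range(len(hip_ys)), key=hip_ys.__getitem__)

# Hip heights over four frames: the lift bottoms out at frame 2.
bottom = deepest_frame([0.40, 0.55, 0.62, 0.50])
```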



If sideAngle.py is being used, the plot will also contain a line representing the depth threshold:



Both frameworks also produce a file output.mp4 containing the parsed frames, with pose annotations, combined into an output video.

Files

helperFunctions.py contains helper functions used by both primary frameworks.
blazePoseDemo.py contains a demo implementation for the BlazePose estimation model.
yolov3Demo.py contains a demo implementation for the YOLOv3 object detection model.

More video samples can be found in the data directory. Image samples used to test the BlazePose and Yolov3 demos are stored in the images directory.

References

A PyTorch implementation of OpenPose, including body and hand pose estimation, was used; its PyTorch model is directly converted from the OpenPose caffemodel by caffemodel2pytorch.

Google's MediaPipe pose estimation was imported and used according to its documentation.

YOLOv3 object detection was imported from GluonCV. The source code can be found here.
