Implemented in PyTorch
This project is a custom implementation of VGGNet trained to classify images of faces into 7 classes, representing different emotions.
The code is highly commented, explaining each step of the process.
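The seven classes follow FER2013's conventional integer labels. A minimal mapping (the `EMOTIONS` dict and `label_to_emotion` helper below are illustrative names, not the repository's actual API):

```python
# FER2013's conventional label-index -> emotion mapping (0..6).
EMOTIONS = {
    0: "Angry",
    1: "Disgust",
    2: "Fear",
    3: "Happy",
    4: "Sad",
    5: "Surprise",
    6: "Neutral",
}

def label_to_emotion(label: int) -> str:
    """Translate an integer class label into its emotion name."""
    return EMOTIONS[label]
```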
We trained for only 35 epochs instead of 300; the resulting checkpoint can be accessed here: 35-Epoch-Checkpoint
- 62% test accuracy, versus 72% for the official implementation
- Live Webcam Demo
- Ability to test custom images
Based on the paper: Facial Emotion Recognition: State of the Art Performance on FER2013
model.py : The VGGNet model, with slight modifications.
data_generation.py : Splits the data into train, validation, and test sets.
data_loader.py : The custom dataset and dataset loader.
main.py : The training loop, plus code to test custom images and run the live webcam demo.
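In fer2013.csv, each row stores a 48x48 grayscale face as a space-separated string of 2304 pixel values, which the custom dataset has to decode. A rough sketch of that parsing step, assuming this CSV layout (the `parse_pixels` function name is illustrative, not the repository's actual API):

```python
import numpy as np

def parse_pixels(pixel_string: str, size: int = 48) -> np.ndarray:
    """Convert a FER2013 space-separated pixel string into a size x size uint8 image."""
    values = np.array([int(p) for p in pixel_string.split()], dtype=np.uint8)
    if values.size != size * size:
        raise ValueError(f"expected {size * size} pixels, got {values.size}")
    return values.reshape(size, size)
```

The resulting array can then be wrapped in a tensor and normalized inside the dataset's `__getitem__`.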
- Download the data
- Unpack fer2013.csv
- Place it in pwd/data/
- Uncomment the data generation lines (First Run only)
- By default, the model uses pre-trained parameters; to train the model yourself, uncomment the train function call.
- By default,
python main.py
will open the webcam demo.
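The data generation step likely splits fer2013.csv on its Usage column, whose values are Training, PublicTest, and PrivateTest; mapping those to train/val/test is a common convention but an assumption here, and `split_rows` is an illustrative name, not the repository's actual function:

```python
import csv
from collections import defaultdict

# Assumed mapping from fer2013.csv's "Usage" column to split names.
USAGE_TO_SPLIT = {
    "Training": "train",
    "PublicTest": "val",
    "PrivateTest": "test",
}

def split_rows(lines):
    """Group fer2013.csv rows (emotion,pixels,Usage) into train/val/test lists."""
    splits = defaultdict(list)
    for row in csv.DictReader(lines):
        splits[USAGE_TO_SPLIT[row["Usage"]]].append(row)
    return dict(splits)
```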