Facial Emotion Recognition (VGGNet)

Brief Description

Implemented in PyTorch

This project is a custom implementation of VGGNet trained to classify images of faces into 7 classes, representing different emotions.

The code is highly commented, explaining each step of the process.

Key Highlights

We trained for only 35 epochs instead of 300, and our 35-epoch checkpoint can be accessed here: 35-Epoch-Checkpoint (a loading sketch follows the list below)

  • 62% test accuracy, compared to 72% for the official implementation
  • Live Webcam Demo
  • Ability to test custom images
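
For reference, loading the checkpoint could look like the minimal sketch below; the file path and the `VGGNet` class name are assumptions, not necessarily the repo's exact identifiers.

```python
import torch
from model import VGGNet  # hypothetical import; the class name in model.py may differ

# Placeholder path: point this at wherever you saved the downloaded checkpoint.
state = torch.load("checkpoints/vgg_35_epochs.pth", map_location="cpu")

model = VGGNet(num_classes=7)
model.load_state_dict(state)  # use state["model_state_dict"] if it was saved inside a dict
model.eval()
```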

Acknowledgement

This is an implementation of the paper:

Facial Emotion Recognition: State of the Art Performance on FER2013

Resources Used for Data Generation/Loading and Live Webcam Demo

Dataset

FER2013

Code Structure

model.py : This file contains the VGGNet model, with slight modifications.
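
As a rough illustration of the architecture, here is a minimal VGG-style network in PyTorch for 48x48 grayscale FER2013 inputs. The block count and channel widths are illustrative assumptions, not this repo's exact configuration.

```python
import torch
import torch.nn as nn

class VGGNet(nn.Module):
    """A minimal VGG-style network for 48x48 grayscale faces, 7 emotion classes."""

    def __init__(self, num_classes: int = 7):
        super().__init__()

        def block(in_ch, out_ch):
            # Two 3x3 convs followed by a 2x2 max-pool, the basic VGG pattern.
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),  # halves the spatial size
            )

        self.features = nn.Sequential(
            block(1, 64),     # 48 -> 24
            block(64, 128),   # 24 -> 12
            block(128, 256),  # 12 -> 6
            block(256, 512),  # 6 -> 3
        )
        self.classifier = nn.Linear(512 * 3 * 3, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))
```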

data_generation.py : This file splits the data into train, validation, and test sets.
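
FER2013 ships as a single CSV whose Usage column already tags each row as "Training", "PublicTest" (commonly used as validation), or "PrivateTest", so the split can follow those tags. A minimal sketch, with file paths assumed from the setup steps below:

```python
import pandas as pd

# Assumed location of the unpacked csv (see "How to Run?" below).
df = pd.read_csv("data/fer2013.csv")

splits = {
    "train": df[df["Usage"] == "Training"],
    "val":   df[df["Usage"] == "PublicTest"],
    "test":  df[df["Usage"] == "PrivateTest"],
}
for name, split in splits.items():
    split.to_csv(f"data/{name}.csv", index=False)
    print(name, len(split))
```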

data_loader.py : This file contains the custom dataset class and data loader.
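
A custom dataset for FER2013 mainly has to parse each row's space-separated pixel string into a 48x48 tensor. A minimal sketch, with a hypothetical class name and file path:

```python
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

class FERDataset(Dataset):  # hypothetical name; the repo's class may differ
    """Parses FER2013 rows, where each face is a space-separated string of 2304 pixels."""

    def __init__(self, csv_path):
        self.df = pd.read_csv(csv_path)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        pixels = np.asarray(row["pixels"].split(), dtype=np.float32)
        image = torch.from_numpy(pixels).view(1, 48, 48) / 255.0  # [C, H, W], scaled to [0, 1]
        return image, int(row["emotion"])

loader = DataLoader(FERDataset("data/train.csv"), batch_size=64, shuffle=True)
```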

main.py : This file contains the training loop, along with code to test custom images and run the live webcam demo.
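
The training loop itself is standard PyTorch; a bare-bones sketch is below. The optimizer and its hyperparameters here are assumptions, not the repo's exact settings.

```python
import torch
import torch.nn as nn

# VGGNet and loader refer to the sketches above; in the repo they would come
# from model.py and data_loader.py.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = VGGNet(num_classes=7).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(35):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss {loss.item():.4f}")
```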

How to Run?

  1. Preparation of data

  • Download the data
  • Unpack fer2013.csv
  • Place it in pwd/data/
  2. Uncomment the data generation lines (first run only).
  3. By default, the model uses pre-trained parameters; to train the model yourself, uncomment the train function call.
  4. By default, python main.py will open the webcam demo (sketched below).
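
For orientation, a webcam demo of this kind typically grabs frames with OpenCV, crops detected faces, and classifies each crop. The sketch below assumes the bundled Haar-cascade face detector and the standard FER2013 label order, which may differ from the repo's code.

```python
import cv2
import torch

# Standard FER2013 label order (an assumption about this repo's mapping).
emotions = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# `model` refers to the trained/loaded network from the sketches above.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        tensor = torch.from_numpy(face).float().view(1, 1, 48, 48) / 255.0
        with torch.no_grad():
            pred = model(tensor).argmax(1).item()
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, emotions[pred], (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("FER demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```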

Thank You
