
Face-Emotion-Recognition

Real-time facial emotions recognition model for music recommendation

Dataset Construction and Preprocessing

Data Preprocessing Steps:

  1. Resizing the images to 48x48 (single grayscale channel)
  2. Manually cleaning the dataset to remove incorrectly labelled expressions
  3. Splitting the data into train, validation, and test sets (80:10:10)
  4. Applying image augmentation using ImageDataGenerator
  5. Using Haar cascades to crop out only the faces from the live feed while making real-time predictions

The data comes from https://www.kaggle.com/jonathanoheix/face-expression-recognition-dataset. We did not use the complete dataset: because the data was imbalanced, we picked out only 4 classes, manually went through all the images to clean them, and finally split them into a ratio of 80:10:10 (train:test:validation). The images are 48x48 grayscale images cropped to the face using Haar cascades. We took 28275 training, 3530 test, and 3532 validation images from Kaggle, but the number of images actually used for training varies, since image augmentation and manual cleaning were applied on top. For the parameters used for the image data generator, see model.ipynb.
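Sketched below is how steps 1, 4, and 5 might fit together, using the frontal-face Haar cascade bundled with OpenCV; the augmentation values here are placeholders, since the actual ImageDataGenerator parameters live in model.ipynb.

```python
import cv2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Haar cascade face detector shipped with OpenCV
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(frame):
    """Return the first detected face, resized to 48x48 grayscale, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], (48, 48))

# Augmentation: these parameter values are illustrative, not the exact
# ones used in model.ipynb
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True)
```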

Model construction

Deep Learning Model:

After manually pre-processing the dataset by deleting duplicates and wrongly classified images, we use a Convolutional Neural Network and transfer learning to build and train a model that predicts the facial emotion of any person. The four emotions are: Happy, Sad, Neutral, and Angry.

The data is split into training and validation sets (80% training, 20% validation) and then augmented accordingly using ImageDataGenerator.

VGG-16 was used as the transfer-learning base. After importing it, we set each layer's trainable attribute to False and select a favorable output layer, in this case 'block5_conv1'. This freezes the transfer-learning model so that we can pre-train, or 'warm up', the layers of our sequential head on the given data before starting the actual training. Warming up lets the sequential model adjust its weights at a lower learning rate.
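A minimal sketch of this freezing step in Keras (the variable names are ours):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

# Load VGG-16 with ImageNet weights and without its classifier head;
# 48x48x3 inputs are allowed because include_top=False only requires
# inputs of at least 32x32
vgg = VGG16(weights="imagenet", include_top=False, input_shape=(48, 48, 3))

# Freeze every layer for the warm-up phase
for layer in vgg.layers:
    layer.trainable = False

# Use the output of 'block5_conv1' as the feature extractor
base_model = Model(inputs=vgg.input,
                   outputs=vgg.get_layer("block5_conv1").output)
```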

H5 files of the model

Setting the hyperparameters and constants (only the best parameters are shown):

  • Batch size: 64
  • Image size: 48 x 48 x 3
  • Optimizers: RMSProp (pre-train), Adam
  • Learning rates: lr1 = 1e-5 (pre-train), lr2 = 1e-4
  • Epochs: 30 (pre-train), 25
  • Loss: categorical crossentropy

Defining the model: using Sequential, the layers are as follows:

  • GlobalAveragePooling2D
  • Flatten
  • Dense (256, activation: 'relu')
  • Dropout (0.4)
  • Dense (128, activation: 'relu')
  • Dropout (0.2)
  • Dense (4, activation: 'softmax')

The pre-training is done with RMSProp at a learning rate of 1e-5 for 30 epochs. After pre-training, we set trainable to True for every layer of the model and the actual training starts, using the Adam optimizer at a learning rate of 1e-4 for 25 epochs. We achieved a training accuracy of 85% and a decent validation accuracy of 75%. All the metrics observed during model training are displayed on one plot.
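Putting the pieces together, a hedged sketch of the head and the two training phases might look like this; train_gen and val_gen stand in for the ImageDataGenerator flows, which are not shown here:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import (Dense, Dropout, Flatten,
                                     GlobalAveragePooling2D)
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam, RMSprop

# Frozen VGG-16 base up to 'block5_conv1', as in the sketch above
vgg = VGG16(weights="imagenet", include_top=False, input_shape=(48, 48, 3))
base_model = Model(vgg.input, vgg.get_layer("block5_conv1").output)
base_model.trainable = False

# Classification head from the layer list above
model = Sequential([
    base_model,
    GlobalAveragePooling2D(),
    Flatten(),
    Dense(256, activation="relu"),
    Dropout(0.4),
    Dense(128, activation="relu"),
    Dropout(0.2),
    Dense(4, activation="softmax"),  # Happy, Sad, Neutral, Angry
])

# Phase 1: pre-train ('warm up') the head with the base frozen
model.compile(optimizer=RMSprop(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=30)

# Phase 2: unfreeze everything and train at the higher learning rate
base_model.trainable = True
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=25)
```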

Different Emotions Detected:

Emousic ~ Selenium automation in Python for music-video recommendation based on the detected facial emotion

Using Selenium automation in Python, whenever the constructed VGG-16 model makes a prediction, you get the emotion as a word: 'ANGRY 😡', 'HAPPY 😀', 'NEUTRAL 😐', or 'SAD 🙁'. This word drives the automation: a driver called chromedriver parses the YouTube webpage, automatically clicks the buttons matching your detected facial emotion, and redirects you to a recommended YouTube video.
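A minimal sketch of such a flow with Selenium 4 follows; the emotion-to-query mapping and the 'video-title' element id are our assumptions about the setup and about YouTube's markup, not the project's exact code:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Illustrative emotion-to-query mapping (assumed, not the project's exact queries)
QUERIES = {
    "ANGRY": "calming+music",
    "HAPPY": "happy+songs",
    "NEUTRAL": "lofi+mix",
    "SAD": "cheer+up+songs",
}

def recommend(emotion: str) -> None:
    """Open YouTube results for the detected emotion and click the first video."""
    driver = webdriver.Chrome()  # Selenium 4+ resolves chromedriver automatically
    driver.get("https://www.youtube.com/results?search_query=" + QUERIES[emotion])
    # 'video-title' is the id YouTube gives result links at the time of writing;
    # this is an assumption and may break if YouTube changes its layout
    driver.find_element(By.ID, "video-title").click()
```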

Instructions to run

  • Requirements: the software dependencies are listed below:

    • pillow

    • numpy==1.16.0

    • opencv-python-headless==4.2.0.32

    • streamlit

    • tensorflow

  • Download the zip file from our repository and unzip it at your desired location.

Enter the following lines in your terminal to run the Streamlit script:

  • STEP 1:
$ pip install -r requirements.txt

  • STEP 2:
$ streamlit run app.py

  • STEP 3: You can now view your Streamlit app in your browser at the local URL http://localhost:8501

Output

Implementation of Face emotion recognition

Contributors

Made with 💜 by DS Community SRM

  • Rakesh
  • Stuti Sehgal
  • Bhavya
  • Shubhangi Soni
  • Sheel
  • Soumya
  • Krish
  • vignesh
