Reading emotions from facial expressions has always been an easy task for humans, but achieving the same with a computer algorithm is quite challenging. With recent advances in computer vision and machine learning, it is now possible to detect emotions from images. In this project, we build a facial emotion recognition system using Convolutional Neural Networks (CNNs), Python, and Flask. Facial expressions are vital identifiers of human feelings because they correspond directly to emotions. In most cases (roughly 55%), the facial expression is a nonverbal channel of emotional expression, and it can serve as concrete evidence of whether an individual is telling the truth or not.
Our Facial Expression Recognition Classifier Model can take input in the following ways:
- Real-time Video input
- Upload Images from the System
- Provide URL of the Image
- It predicts the user's emotion and also gives a graphical visualization of the emotions.
Built with:
- Python
- Flask
- HTML, CSS
- Deep Learning (CNN)
- Fork this repository.
- Clone the repository to your system using `git clone`. Example:
  `git clone https://github.com/<your-github-username>/Facial-Expression-Recognition-Classifier-Model`
- Create a new virtual environment with Python 3.7.0.
- Install all the dependencies with `pip install -r requirements.txt`.
- Now run the `main.py` file.
- Once it shows `Running on http://127.0.0.1:5000/`, go to http://127.0.0.1:5000/ in your browser.
- Import the required packages and libraries.
- Analyse the data and create the training and validation batches.
- Create a CNN using 4 convolutional layers, each with Batch Normalization, Activation, Max Pooling and Dropout layers, followed by a Flatten layer, 2 fully connected Dense layers, and finally a Dense layer with the SoftMax activation function (a model sketch follows this list).
- Compile the model using the `Adam` optimizer and the categorical cross-entropy loss function.
- Train the model for 15 epochs, then evaluate it and save the model weights in `.h5` format.
- Save the model architecture as a `JSON` string (see the save/reload sketch below).
- Create a class in a separate file to reload the model and its weights, make predictions, and return the probabilities of each emotion.
- Create one more class in a separate file which takes in the real-time video input and returns frames of images with a circle around the detected face and its predicted emotion written on it (see the camera sketch below).
- A Python script is also created which, upon running, yields the graphical visualization of the emotions present in the provided image (see the plotting sketch below).
- Finally, create a file which brings together all the classes defined above and deploys our application using Flask (a minimal Flask sketch closes this section).
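The README does not pin down the exact layer sizes, so the following is a minimal sketch of the architecture described above, assuming Keras/TensorFlow, FER2013-style 48x48 grayscale inputs, and 7 emotion classes; the filter and unit counts are illustrative assumptions:

```python
# Sketch of the 4-conv-block CNN described above (Keras/TensorFlow assumed).
# Input shape (48x48 grayscale) and 7 emotion classes are assumptions.
from tensorflow.keras import Input
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
                                     Dense, Dropout, Flatten, MaxPooling2D)
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

def build_model(input_shape=(48, 48, 1), num_classes=7):
    model = Sequential([Input(shape=input_shape)])
    # 4 convolutional blocks: Conv -> BatchNorm -> Activation -> MaxPool -> Dropout
    for filters in (64, 128, 256, 512):      # filter counts are an assumption
        model.add(Conv2D(filters, (3, 3), padding="same"))
        model.add(BatchNormalization())
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
    model.add(Flatten())
    # 2 fully connected Dense layers
    for units in (256, 512):                 # unit counts are an assumption
        model.add(Dense(units))
        model.add(BatchNormalization())
        model.add(Activation("relu"))
        model.add(Dropout(0.25))
    # final Dense layer with SoftMax over the emotion classes
    model.add(Dense(num_classes, activation="softmax"))
    model.compile(optimizer=Adam(learning_rate=0.0005),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then be a call along the lines of `model.fit(train_batches, validation_data=val_batches, epochs=15)`, using the batches produced in the data step.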
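The save/reload step above might look like the following sketch; the file names (`model.json`, `model_weights.h5`) and the emotion label list are assumptions, not taken from the repository:

```python
# Sketch of the prediction class: reload the JSON architecture plus the
# trained .h5 weights and return per-emotion probabilities.
import numpy as np
from tensorflow.keras.models import model_from_json

# After training, the architecture and weights would be saved roughly as:
#   with open("model.json", "w") as f:
#       f.write(model.to_json())
#   model.save_weights("model_weights.h5")

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

class FacialExpressionModel:
    """Reloads the saved model and returns the probability of each emotion."""

    def __init__(self, model_json="model.json", weights="model_weights.h5"):
        with open(model_json) as f:
            self.model = model_from_json(f.read())
        self.model.load_weights(weights)

    def predict_emotion(self, face):
        """face: (48, 48) grayscale array with pixel values 0-255."""
        batch = np.asarray(face, dtype="float32").reshape(1, 48, 48, 1) / 255.0
        probs = self.model.predict(batch)[0]
        return dict(zip(EMOTIONS, probs.tolist()))
```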
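The camera class described above could be sketched as follows with OpenCV, reusing the hypothetical `FacialExpressionModel` from the previous sketch; the Haar-cascade detector and the JPEG encoding of frames are assumptions about the implementation:

```python
# Sketch of the real-time video class: read webcam frames, detect faces,
# circle each face and write the predicted emotion on the frame.
import cv2

class VideoCamera:
    def __init__(self, model):
        self.video = cv2.VideoCapture(0)   # default webcam
        self.model = model
        # frontal-face Haar cascade shipped with OpenCV
        self.detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def __del__(self):
        self.video.release()

    def get_frame(self):
        ok, frame = self.video.read()
        if not ok:
            return None
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in self.detector.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
            probs = self.model.predict_emotion(face)
            emotion = max(probs, key=probs.get)
            # circle around the face, emotion label above it
            cv2.circle(frame, (x + w // 2, y + h // 2), w // 2, (0, 255, 0), 2)
            cv2.putText(frame, emotion, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
        # encode as JPEG bytes so the frame can be streamed by Flask
        return cv2.imencode(".jpg", frame)[1].tobytes()
```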
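The plotting script might be as simple as this sketch (matplotlib itself is an assumption, since the README does not name the plotting library):

```python
# Sketch of the graphical visualization of emotions in an image.
import matplotlib.pyplot as plt

def plot_emotions(probs):
    """probs: dict of emotion -> probability, e.g. from predict_emotion()."""
    plt.bar(list(probs), list(probs.values()), color="steelblue")
    plt.ylabel("Probability")
    plt.title("Emotions detected in the image")
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.show()
```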
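Finally, a minimal sketch of the Flask glue, assuming the hypothetical classes from the sketches above and the standard `multipart/x-mixed-replace` MJPEG-streaming pattern; the route names and template are illustrative:

```python
# Sketch of the deployment file: an index page plus a streaming route that
# yields annotated webcam frames. VideoCamera and FacialExpressionModel are
# the hypothetical classes defined in the sketches above.
from flask import Flask, Response, render_template

app = Flask(__name__)

def gen(camera):
    # yield each JPEG frame as one part of a multipart response
    while True:
        jpeg = camera.get_frame()
        if jpeg is None:
            break
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n")

@app.route("/")
def index():
    return render_template("index.html")

@app.route("/video_feed")
def video_feed():
    camera = VideoCamera(FacialExpressionModel())
    return Response(gen(camera),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```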
If you are new to open-source contribution, go through the guide here on making your first contribution!
- Fork this repository.
- Clone the repository to your system using
  `git clone https://github.com/<your-github-username>/Facial-Expression-Recognition-Classifier-Model`
- Create a branch:
  - Change to the repository directory on your computer: `cd Facial-Expression-Recognition-Classifier-Model`
  - Now create a branch using the git checkout command: `git checkout -b your-new-branch-name`
- Make changes as per your requirement to solve the issues mentioned in the Future Scope of the Project, and commit those changes.
- If you go to the project directory and execute the command `git status`, you'll see there are changes. Add those changes to the branch you just created using the `git add` command.
- Now commit those changes using the git commit command: `git commit -m "Added the feature of Suggesting Music"`
- Push your changes to GitHub using the command `git push origin <add-your-branch-name>`.
- If you go to your repository on GitHub, you'll see a "Compare & pull request" button. Click on that button.
- Now describe the changes you made and submit the pull request.
- Wait for the maintainers to review :)
Excited to contribute to the project? Head over to the Open Issues here!
Thanks to all these wonderful developers who made this project awesome! :raised_hands:
This project is part of the following programs:
You can find our Code of Conduct here.
This project follows the MIT License.