EmotionLens is a system for real-time facial emotion recognition and object detection, built on advanced deep learning architectures.
- Real-time facial emotion recognition using the Residual Masking Network
- Object detection with YOLO for added context around the emotion results
- Face tracking to stabilize and improve the emotion results
EmotionLens/
├── assets/
│ ├── logo.png
│ ├── poster.jpg
├── utils/
│ ├── emotionlens.py
│ ├── sort.py
│ ├── general.py
├── config.py
├── main.py
├── README.md
├── requirements.txt
└── .gitignore
Face detection is performed with RetinaFace, run in batch mode so that a large number of faces can be handled in real time.
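As a rough illustration of this step, the sketch below runs RetinaFace on a single frame via the retina-face pip package; the package choice and its `detect_faces` API are assumptions, and the project's batch-mode integration may use a different RetinaFace binding.

```python
import cv2
from retinaface import RetinaFace  # assumption: one of several RetinaFace bindings

frame = cv2.imread("frame.jpg")

# Assumed API: returns a dict keyed by face id, each entry holding a
# "facial_area" bounding box in [x1, y1, x2, y2] order
faces = RetinaFace.detect_faces(frame)

for face_id, face in faces.items():
    x1, y1, x2, y2 = face["facial_area"]
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("frame_with_faces.jpg", frame)
```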
EmotionLens employs the Residual Masking Network for emotion recognition. This deep learning model is specifically designed to interpret a wide range of emotional states from facial expressions. It categorizes emotions into angry, disgust, fear, happy, sad, surprise, and neutral.
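As a minimal sketch, a single frame could be passed to the `rmn` package published alongside the Residual Masking Network; the package name and the `detect_emotion_for_single_frame` call are assumptions about that upstream project, and the wrapper in `utils/emotionlens.py` may expose a different interface.

```python
import cv2
from rmn import RMN  # assumed package name for the Residual Masking Network

detector = RMN()  # assumption: pretrained weights are fetched on first use

frame = cv2.imread("person.jpg")

# Assumed API: returns one dict per detected face with a bounding box,
# the top emotion label, and per-class probabilities
results = detector.detect_emotion_for_single_frame(frame)

for face in results:
    print(face["emo_label"], face["emo_proba"])
```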
The YOLO (You Only Look Once) object detection system is integrated to identify objects such as laptops, phones, and other related items. This adds context to the emotion recognition results, allowing engagement and behavior to be interpreted more accurately.
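The sketch below shows class-filtered detection with the ultralytics YOLO API as one way to keep only relevant objects; the YOLO version, the weights file, and how `YOLO_REQ_CLS` is applied inside EmotionLens are assumptions here.

```python
import cv2
from ultralytics import YOLO  # assumption: the project may use a different YOLO implementation

YOLO_REQ_CLS = ["laptop", "cell phone"]  # illustrative class filter, mirroring config.py

model = YOLO("yolov8n.pt")  # illustrative weights file
frame = cv2.imread("scene.jpg")

results = model(frame)[0]
for box in results.boxes:
    name = results.names[int(box.cls)]
    if name in YOLO_REQ_CLS:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        print(f"{name}: ({x1}, {y1}) -> ({x2}, {y2}), conf={float(box.conf):.2f}")
```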
The tracker module tracks the faces coming out of the face detector across frames. This makes it possible to take a majority vote over each track's per-frame predictions and use it as that face's final emotion. The tracking algorithm used is SORT (Simple Online and Realtime Tracking), adapted from its original open-source implementation (see `utils/sort.py`).
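A minimal sketch of the majority-vote idea is shown below, assuming the SORT class in `utils/sort.py` follows the common interface of taking an N x 5 array of `[x1, y1, x2, y2, score]` detections and returning boxes with track IDs; the helper functions and the order-based matching are illustrative simplifications, not the project's actual code.

```python
from collections import Counter, defaultdict

import numpy as np
from utils.sort import Sort  # tracker bundled with the project

tracker = Sort()
votes = defaultdict(Counter)  # track_id -> counts of predicted emotions


def update(face_boxes, emotions):
    """face_boxes: N x 5 array of [x1, y1, x2, y2, score]; emotions: one label per box."""
    tracks = tracker.update(np.asarray(face_boxes))  # assumed to return [x1, y1, x2, y2, track_id]
    # Simplification: in practice tracked boxes should be matched back to the
    # detections (e.g. by IoU); zipping assumes matching order and count.
    for (x1, y1, x2, y2, track_id), emotion in zip(tracks, emotions):
        votes[int(track_id)][emotion] += 1


def final_emotion(track_id):
    # Majority vote over all per-frame predictions seen for this track
    return votes[track_id].most_common(1)[0][0]
```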
- Clone the repository:
  `git clone https://github.com/baselhusam/EmotionLens.git`
  `cd EmotionLens`
- Install the necessary dependencies:
  `pip install -r requirements.txt`
- Run the application:
  `python main.py`
- Face Detection: The model processes video frames to detect and localize faces.
- Emotion Recognition: Detected faces are analyzed to recognize and categorize emotions.
- Object Detection: The YOLO model identifies objects to provide context for emotional states.
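Putting the three stages together, the per-frame loop might look roughly like the sketch below; `detect_faces`, `classify_emotion`, and `detect_objects` are hypothetical placeholders standing in for the project's actual modules, not functions exported by `utils/emotionlens.py`.

```python
import cv2


def detect_faces(frame):           # placeholder: would call RetinaFace here
    return []

def classify_emotion(frame, box):  # placeholder: would call the Residual Masking Network
    return "neutral"

def detect_objects(frame):         # placeholder: would call YOLO
    return []


cap = cv2.VideoCapture(0)  # 0 = webcam, mirroring SRC_VIDEO in config.py
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    faces = detect_faces(frame)                              # 1. face detection
    emotions = [classify_emotion(frame, f) for f in faces]   # 2. emotion recognition
    objects = detect_objects(frame)                          # 3. object detection for context

    for (x1, y1, x2, y2), emotion in zip(faces, emotions):
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, emotion, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

    cv2.imshow("EmotionLens", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```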
The `config.py` file allows customization of various parameters to tailor the system to specific requirements. Below is an overview of the configurable parameters:

- `SRC_VIDEO`: Video source (0 for webcam, or the path to a video file)
- `FILTER_EMOTIONS`: Filter emotions into positive, negative, and neutral only (True/False)
- `APPLY_TRACKER`: Apply the tracker (True/False)
- `APPLY_YOLO`: Apply YOLO object detection (True/False)
- `YOLO_REQ_CLS`: List of required classes used to filter YOLO results
The `config.py` file ensures that the EmotionLens system is highly customizable to fit various operational requirements and environments.
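For reference, a `config.py` along these lines would match the parameters described above; the values shown are illustrative, not the project's actual defaults.

```python
# config.py - illustrative values only; adjust to your own setup

SRC_VIDEO = 0                             # 0 for webcam, or a path such as "videos/session.mp4"
FILTER_EMOTIONS = True                    # group emotions into positive / negative / neutral
APPLY_TRACKER = True                      # run the tracker so each face gets a majority-vote emotion
APPLY_YOLO = True                         # run YOLO for contextual object detection
YOLO_REQ_CLS = ["laptop", "cell phone"]   # keep only these classes from the YOLO results
```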