Created the folder containing files for mask detection #941
Merged
Commits (8)
- 6863233 Created the folder containing files for mask detection (Dharun235)
- e31c74a updating Project-Structure.md (Dharun235)
- 3026deb Removed the dataset folder and added the link to dataset in readme file (Dharun235)
- fc044ce Rename Model-training to Model-training.py (Dharun235)
- 88a4588 updating Project-Structure.md (Dharun235)
- b3937af Merge branch 'main' into mask-detection-project (Dharun235)
- df1bafc Merge branch 'main' into mask-detection-project (Dharun235)
- b81efe2 Merge branch 'main' into mask-detection-project (Dharun235)

New file (51 lines): the real-time mask detection script (`MaskDetect.py`, as referenced in the README below).
import cv2
import tensorflow as tf
import numpy as np

# Load a pre-trained MobileNetV2 mask detection model (change to your model path or URL if available)
model = tf.keras.models.load_model('mask_detector_mobilenetv2.h5')  # Make sure the model is in the same folder or provide a path

# Function to preprocess the image for MobileNetV2
def preprocess_image(face):
    face_resized = cv2.resize(face, (224, 224))  # Resize to MobileNetV2 input size
    face_normalized = face_resized / 255.0  # Normalize pixel values
    face_expanded = np.expand_dims(face_normalized, axis=0)  # Add batch dimension
    return face_expanded

# Function to perform mask detection
def detect_mask(frame):
    # Preprocess the frame before feeding it to the model
    face_preprocessed = preprocess_image(frame)
    prediction = model.predict(face_preprocessed)

    # MobileNetV2 model output for binary classification (Mask / No Mask)
    mask_probability = prediction[0][0]  # Get the single output probability for "with_mask"

    # Set the threshold for mask detection
    threshold = 0.5
    label = "Mask" if mask_probability > threshold else "No Mask"
    color = (0, 255, 0) if label == "Mask" else (0, 0, 255)

    return label, color

# Initialize video capture
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Run mask detection on each frame
    label, color = detect_mask(frame)

    # Draw label on the frame
    cv2.putText(frame, f'{label}', (30, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
    cv2.imshow("Mask Detection", frame)

    # Exit on 'q' key press
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

New file (119 lines): the training script (`Model-training.py`).
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
import os
import cv2
import xml.etree.ElementTree as ET
import numpy as np

# Function to parse XML annotations
def parse_annotations(annotations_path):
    data = []

    for xml_file in os.listdir(annotations_path):
        if xml_file.endswith('.xml'):
            tree = ET.parse(os.path.join(annotations_path, xml_file))
            root = tree.getroot()
            image_file = root.find('filename').text  # Get image filename without path

            # For each object in the annotation
            objects = root.findall('object')
            for obj in objects:
                label = obj.find('name').text  # Get label
                # Map labels to binary (1 for with_mask, 0 for without_mask)
                data.append((image_file, 1 if label == 'with_mask' else 0))

    return data

# Paths
annotations_path = 'dataset/annotations'
images_path = 'dataset/images'

# Parse annotations to get image paths and labels
data = parse_annotations(annotations_path)

# Load an image by filename; returns None if the file is missing
def load_image_and_label(image_name):
    image_path = os.path.join(images_path, image_name)

    # Check if the image exists
    if not os.path.exists(image_path):
        print(f"Image {image_name} not found.")
        return None

    image = cv2.imread(image_path)
    image = cv2.resize(image, (224, 224))  # Resize to MobileNetV2 input size
    image = image / 255.0  # Normalize image
    return image

# Prepare the dataset
images = []
labels = []

for image_name, label in data:
    img = load_image_and_label(image_name)
    if img is not None:  # Only add valid images
        images.append(img)
        labels.append(label)

images = np.array(images)
labels = np.array(labels)

# Create a training-validation split
split_index = int(0.8 * len(images))
train_images = images[:split_index]
train_labels = labels[:split_index]
val_images = images[split_index:]
val_labels = labels[split_index:]

# Create ImageDataGenerators
train_datagen = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.2,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest"
)

val_datagen = ImageDataGenerator()

# Create generators
train_generator = train_datagen.flow(train_images, train_labels, batch_size=32)
val_generator = val_datagen.flow(val_images, val_labels, batch_size=32)

# Load MobileNetV2
base_model = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze base model layers
for layer in base_model.layers:
    layer.trainable = False

# Add custom top layers
x = base_model.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(1, activation="sigmoid")(x)  # Sigmoid for binary classification

# Create model
model = Model(inputs=base_model.input, outputs=predictions)

# Compile model
model.compile(optimizer=Adam(learning_rate=1e-4), loss="binary_crossentropy", metrics=["accuracy"])

# Train model
epochs = 10
history = model.fit(
    train_generator,
    steps_per_epoch=len(train_images) // 32,
    validation_data=val_generator,
    validation_steps=len(val_images) // 32,
    epochs=epochs
)

# Save model
model.save("mask_detector_mobilenetv2.h5")
print("Model saved as mask_detector_mobilenetv2.h5")
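
One caveat worth flagging: the 80/20 split above is taken in `os.listdir` order without shuffling, so the validation set may not be representative. A minimal sketch of a shuffled split that could replace the `split_index` block (a hypothetical change, not part of this PR):

```python
import numpy as np

# Shuffle images and labels together before splitting (fixed seed for reproducibility)
rng = np.random.default_rng(42)
perm = rng.permutation(len(images))
images, labels = images[perm], labels[perm]

split_index = int(0.8 * len(images))
train_images, val_images = images[:split_index], images[split_index:]
train_labels, val_labels = labels[:split_index], labels[split_index:]
```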

New file (112 lines): the project `README.md`.
# Mask Detection Using MobileNetV2

## Project Overview
This project implements a real-time mask detection system using deep learning techniques. The model is based on the MobileNetV2 architecture, pre-trained on the ImageNet dataset. The system classifies individuals as wearing or not wearing a mask from a live webcam feed.

## Table of Contents
- [Features](#features)
- [Requirements](#requirements)
- [Dataset Structure](#dataset-structure)
- [Installation](#installation)
- [Training the Model](#training-the-model)
- [Using the Model](#using-the-model)

## Features
- Real-time mask detection from a webcam feed.
- Utilizes MobileNetV2 for efficient image classification.
- Simple and user-friendly interface.
- Easy to modify and extend for other use cases.

## Requirements
- Python 3.6 or higher
- TensorFlow
- OpenCV
- NumPy
- Standard-library modules (os, xml.etree.ElementTree)

You can install the required libraries using pip:

```bash
pip install tensorflow opencv-python numpy
```

## Dataset Structure
The dataset can be downloaded from https://www.kaggle.com/datasets/andrewmvd/face-mask-detection/data. It should contain two main folders, `annotations` and `images`, structured like this:

```
dataset/
│
├── annotations/
│   ├── example1.xml
│   ├── example2.xml
│   └── ...
│
└── images/
    ├── example1.png
    ├── example2.png
    └── ...
```
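
A quick way to confirm the layout matches what `Model-training.py` expects is to check that images and annotations pair up by filename; a small sketch (not part of this project's scripts), assuming the `dataset/` folder above:

```python
import os

# Compare filenames (without extensions) across the two folders
image_stems = {os.path.splitext(f)[0] for f in os.listdir("dataset/images")}
annot_stems = {os.path.splitext(f)[0] for f in os.listdir("dataset/annotations")}

print(f"{len(image_stems)} images, {len(annot_stems)} annotation files")
print(f"images without an annotation: {len(image_stems - annot_stems)}")
```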

### XML Annotation Format
Each XML file corresponds to an image and should contain bounding box information and labels. Example XML structure:

```xml
<annotation>
    <folder>images</folder>
    <filename>example1.png</filename>
    <size>
        <width>512</width>
        <height>366</height>
        <depth>3</depth>
    </size>
    <object>
        <name>with_mask</name>
        <bndbox>
            <xmin>100</xmin>
            <ymin>150</ymin>
            <xmax>200</xmax>
            <ymax>250</ymax>
        </bndbox>
    </object>
    <object>
        <name>without_mask</name>
        <bndbox>
            <xmin>250</xmin>
            <ymin>150</ymin>
            <xmax>350</xmax>
            <ymax>250</ymax>
        </bndbox>
    </object>
</annotation>
```
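
The training script reads only the `<filename>` and each object's `<name>` from these files (bounding boxes are ignored). A minimal sketch of how one annotation is parsed with `xml.etree.ElementTree`, assuming the example above is saved as `dataset/annotations/example1.xml`:

```python
import xml.etree.ElementTree as ET

root = ET.parse("dataset/annotations/example1.xml").getroot()
image_file = root.find("filename").text                              # "example1.png"
labels = [obj.find("name").text for obj in root.findall("object")]   # ['with_mask', 'without_mask']
binary = [1 if name == "with_mask" else 0 for name in labels]        # [1, 0]
print(image_file, labels, binary)
```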

## Installation
1. Clone this repository to your local machine:
```bash
git clone <repository_url>
cd <repository_name>
```

2. Install the required libraries as mentioned above.

## Training the Model
To train the mask detection model, follow these steps:

1. Prepare your own dataset, or download the dataset linked above and place it in a `dataset/` folder.
2. Use the provided training script (`Model-training.py`) to train the model:
```bash
python Model-training.py
```

3. After training, the model will be saved as `mask_detector_mobilenetv2.h5`.

## Using the Model
To use the trained model for real-time mask detection:

1. Run the mask detection script (`MaskDetect.py`):
```bash
python MaskDetect.py
```

2. The webcam feed will open, and the model will classify in real time whether individuals are wearing a mask. Press 'q' to exit.
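
If no webcam is available, the saved model can also be run on a single image file. A minimal sketch, assuming a hypothetical image path `test.jpg` and the `mask_detector_mobilenetv2.h5` produced by training:

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("mask_detector_mobilenetv2.h5")

# Same preprocessing as MaskDetect.py: resize to 224x224 and scale to [0, 1]
image = cv2.imread("test.jpg")
image = cv2.resize(image, (224, 224)) / 255.0
prob = model.predict(np.expand_dims(image, axis=0))[0][0]

print("Mask" if prob > 0.5 else "No Mask", f"(with_mask probability: {prob:.2f})")
```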
Binary file not shown.
Hey @Dharun235,
Please make sure that you check the extensions of the files before committing. This file, I suppose, should be a .py file.
Update this and I will approve the PR.
Thank you!
Now I have done it.