
In this repository, I will walk you through the evolution of face recognition algorithms using a deep learning approach.


Face-Recognition-Deep-Learning

Deep learning approach:

It all began in 2014, when Facebook introduced the DeepFace algorithm.

Evolution of face recognition:

1) DeepFace

2) DeepID Series

3) VGGFace

4) FaceNet

5) VGGFace2, etc.


FR Module:

FR can be categorized as face verification and face identification. Face verification computes a one-to-one similarity between a gallery image and a probe image to determine whether the two images are of the same subject. Face identification computes one-to-many similarities to determine the specific identity of the probe face.

When the probe identity appears in the gallery, this is referred to as closed-set identification, whereas when the probes include subjects who are not in the gallery, this is open-set identification.
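
The two modes can be made concrete with a short sketch. The NumPy code below (the feature dimension, threshold value, and function names are illustrative assumptions, not part of any specific method) treats verification as a one-to-one similarity check against a threshold, closed-set identification as a nearest-neighbor search over the gallery, and open-set identification as the same search with a rejection option:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_feat, gallery_feat, threshold=0.5):
    """Face verification: one-to-one comparison against a threshold."""
    return cosine_similarity(probe_feat, gallery_feat) >= threshold

def identify(probe_feat, gallery_feats, gallery_ids, open_set=False, threshold=0.5):
    """Face identification: one-to-many comparison.

    Closed-set: return the identity of the most similar gallery face.
    Open-set: additionally reject the probe as 'unknown' when even the
    best match falls below the threshold.
    """
    sims = [cosine_similarity(probe_feat, g) for g in gallery_feats]
    best = int(np.argmax(sims))
    if open_set and sims[best] < threshold:
        return "unknown"
    return gallery_ids[best]

# Toy usage with random 128-D embeddings (dimension chosen arbitrarily).
rng = np.random.default_rng(0)
gallery = [rng.normal(size=128) for _ in range(3)]
ids = ["alice", "bob", "carol"]
probe = gallery[1] + 0.05 * rng.normal(size=128)      # a noisy view of "bob"
print(verify(probe, gallery[1]))                      # True (same subject)
print(identify(probe, gallery, ids, open_set=True))   # "bob"
```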

The FR module consists of face processing, deep feature extraction, and face matching, which can be summarized by the expression below.

M[F(P_i(I_i)), F(P_j(I_j))]

where I_i and I_j are two face images; P stands for face processing, which handles intra-personal variations (pose, illumination, expression, etc.); F denotes feature extraction, which encodes the identity information; and M is a face matching algorithm used to compute similarity scores.
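
Read as code, the expression is simply a composition of three stages. The sketch below uses placeholder stand-ins (identity preprocessing, a flattened-pixel "feature extractor", and cosine similarity as the matcher) purely to show how P, F, and M plug together:

```python
import numpy as np

def P(image):
    """Face processing: align/normalize the face (identity stand-in here)."""
    return image

def F(image):
    """Feature extraction: map a face image to an embedding.
    Stand-in: a flattened, L2-normalized pixel vector instead of a CNN."""
    feat = image.reshape(-1).astype(np.float32)
    return feat / (np.linalg.norm(feat) + 1e-12)

def M(feat_i, feat_j):
    """Face matching: similarity score between two embeddings (cosine)."""
    return float(np.dot(feat_i, feat_j))

I_i = np.random.rand(112, 112, 3)
I_j = np.random.rand(112, 112, 3)
score = M(F(P(I_i)), F(P(I_j)))   # M[F(P_i(I_i)), F(P_j(I_j))]
print(score)
```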


1) Face Processing:

Face processing methods are categorized as "one-to-many" augmentation and "many-to-one" normalization.

a) One-to-many augmentation:

Generating many patches or images with pose variability from a single image, enabling deep networks to learn pose-invariant representations (a toy sketch of this appears after this list).

b) Many-to-one normalization:

Recovering the canonical (frontal) view of a face from one or many non-frontal images; FR can then be performed as if it were under controlled conditions.

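As a toy illustration of the "one-to-many" idea, the sketch below produces several variants of a single aligned face using flips and jittered crops. Real one-to-many methods typically synthesize genuinely new poses with 3D face models or GANs, so treat this only as a shape-of-the-idea example (the image and crop sizes are arbitrary assumptions):

```python
import numpy as np

def one_to_many_augment(face, crop=100):
    """Generate several training views from a single aligned face image.

    Horizontal flips and jittered crops below are only a toy stand-in for
    real pose synthesis (3D models, GANs) used by one-to-many methods.
    """
    h, w = face.shape[:2]
    variants = [face, face[:, ::-1]]          # original + horizontal flip
    for dy, dx in [(0, 0), (0, w - crop), (h - crop, 0), (h - crop, w - crop)]:
        variants.append(face[dy:dy + crop, dx:dx + crop])
    return variants

face = np.random.rand(112, 112, 3)            # stand-in for an aligned face
augmented = one_to_many_augment(face)
print(len(augmented))                         # 6 training views from 1 image
```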

2) Deep Feature Extraction:

Using a CNN architecture as the backbone, such as AlexNet, VGGNet, GoogLeNet, ResNet, or SENet.
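
A minimal PyTorch sketch of this step is given below. The tiny network is only a stand-in for the backbones named above; its layer sizes and the 128-D embedding are arbitrary illustrative choices, not the configuration of any published architecture:

```python
import torch
import torch.nn as nn

class TinyFaceBackbone(nn.Module):
    """Toy CNN backbone: face image -> fixed-length identity embedding."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global average pooling
        )
        self.embed = nn.Linear(128, embedding_dim)

    def forward(self, x):
        x = self.features(x).flatten(1)
        x = self.embed(x)
        # L2-normalize so cosine similarity reduces to a dot product
        return nn.functional.normalize(x, p=2, dim=1)

model = TinyFaceBackbone().eval()
faces = torch.randn(4, 3, 112, 112)            # batch of aligned face crops
with torch.no_grad():
    embeddings = model(faces)                  # shape: (4, 128)
print(embeddings.shape)
```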


Loss Function:

Many works focus on creating novel loss functions that make features not only more separable but also more discriminative; two of these losses are sketched after the list below.

i) Euclidean-distance-based loss:

Compressing intra-class variance and enlarging inter-class variance based on Euclidean distance (e.g., contrastive loss, triplet loss, and center loss).

ii) Angular/cosine-margin-based loss:

Learning discriminative face features in terms of angular similarity, leading to potentially larger angular/cosine separability between learned features.

iii) Softmax loss and its variations:

Directly using the softmax loss, or modifying it to improve performance, e.g., by applying L2 normalization.
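
Below are minimal NumPy sketches of one loss from each of the first two families: a triplet loss (Euclidean-distance-based) and an additive-angular-margin softmax in the spirit of ArcFace. The margin and scale values are illustrative assumptions, and a real implementation would operate on framework tensors with automatic differentiation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Euclidean-distance-based loss: pull the positive (same identity)
    closer to the anchor than the negative (different identity) by a margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def arc_margin_softmax_loss(feature, weights, label, s=64.0, m=0.5):
    """Angular-margin softmax (ArcFace-style sketch): add a margin m to the
    angle between the feature and its class weight, then scale by s.

    feature: (d,) embedding; weights: (num_classes, d); label: true class id.
    """
    f = feature / np.linalg.norm(feature)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ f                                   # cosine to every class centre
    theta = np.arccos(np.clip(cos[label], -1.0, 1.0))
    logits = s * cos
    logits[label] = s * np.cos(theta + m)         # penalize the true-class angle
    logits -= logits.max()                        # numerical stability
    probs = np.exp(logits) / np.sum(np.exp(logits))
    return -np.log(probs[label])

rng = np.random.default_rng(0)
a, p, n = rng.normal(size=(3, 128))
print(triplet_loss(a, p, n))
W = rng.normal(size=(10, 128))                    # 10 identities (toy)
print(arc_margin_softmax_loss(a, W, label=3))
```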


3) Face Matching By Deep Features:

After the deep networks are trained with massive data and an appropriate loss function, each of the test images is passed through the networks to obtain a deep feature representation.

Once the deep features are extracted, most methods directly calculate the similarity between two features using cosine distance or L2 distance.

Then, nearest-neighbor search and threshold comparison are used for both identification and verification tasks.
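
For L2-normalized features, cosine distance and L2 distance give the same ranking, since ||a - b||^2 = 2 - 2*cos(a, b). The sketch below (the threshold value is an arbitrary assumption) computes both distances and applies the threshold comparison used for verification:

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x)

def match(feat_a, feat_b, threshold=0.4):
    """Compare two deep features with cosine and L2 distance."""
    a, b = l2_normalize(feat_a), l2_normalize(feat_b)
    cos_sim = float(np.dot(a, b))
    cos_dist = 1.0 - cos_sim
    l2_dist = float(np.linalg.norm(a - b))        # equals sqrt(2 - 2*cos_sim)
    same_subject = cos_dist <= threshold          # threshold comparison (verification)
    return cos_dist, l2_dist, same_subject

rng = np.random.default_rng(1)
f1 = rng.normal(size=128)
f2 = f1 + 0.1 * rng.normal(size=128)              # embedding of a similar face
print(match(f1, f2))
```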

