
Iris Dataset Identification Project


This project is an implementation of iris dataset identification using edge detection, Hough transform, and Daugman normalization. We also employ a Siamese network with contrastive loss for identification.

📝 Overview

Our implementation follows these steps:

  1. 🎨 Data preprocessing: We preprocess the iris dataset using edge detection, Hough transform, and Daugman normalization to enhance the images and prepare them for identification.

  2. 🤖 Siamese network: We implement a Siamese network for identification, using contrastive loss to train the network.

  3. 📈 Visualization and Results: We present the results of our identification model, including accuracy and comparison with other methods.

🎨 Data Preprocessing

In order to enhance the iris images for identification, we employ three methods: edge detection, Hough transform, and Daugman normalization.

🖼️ Edge Detection

We use Canny edge detection to extract the edges of the iris image, which can help to remove noise and enhance the features of the iris.
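Canny edge detection is usually applied through a library such as OpenCV; as a dependency-free illustration, here is a NumPy sketch of the Sobel gradient-magnitude stage that Canny builds on (the threshold value is illustrative, and non-maximum suppression and hysteresis are omitted):

```python
import numpy as np

def gradient_edges(img, thresh=50.0):
    """Sobel gradient magnitude with a single threshold -- the first
    stage of Canny edge detection (no non-maximum suppression here)."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # valid-region 2D correlation with the Sobel kernels
    for dy in range(3):
        for dx in range(3):
            gx[1:h-1, 1:w-1] += kx[dy, dx] * img[dy:h-2+dy, dx:w-2+dx]
            gy[1:h-1, 1:w-1] += ky[dy, dx] * img[dy:h-2+dy, dx:w-2+dx]
    return np.hypot(gx, gy) > thresh
```

On a grayscale iris image this yields a boolean edge map suitable as input to the Hough transform step below.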

🌀 Hough Transform

We use the Hough transform to detect the circular shape of the iris, which is important for accurate identification.

📣 Note

I implemented the Hough transform using NumPy broadcasting and vectorization; with a few further modifications it achieves better speed and lower computational cost.
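The vectorized accumulator can be sketched like this (a minimal NumPy version, not the repository's exact code; the radius list and angle count are illustrative):

```python
import numpy as np

def hough_circles(edges, radii, n_angles=64):
    """Vectorized circular Hough transform over a boolean edge map.
    Returns acc[i, y, x] = votes for a circle of radius radii[i]
    centered at (x, y)."""
    h, w = edges.shape
    ys, xs = np.nonzero(edges)  # edge pixel coordinates
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    for i, r in enumerate(radii):
        # candidate centers for every (edge point, angle) pair,
        # computed in one shot via broadcasting
        cy = np.rint(ys[:, None] - r * np.sin(thetas)[None, :]).astype(int)
        cx = np.rint(xs[:, None] - r * np.cos(thetas)[None, :]).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[i], (cy[ok], cx[ok]), 1)
    return acc
```

The peak of the accumulator gives the radius and center of the dominant circle, i.e. the iris boundary.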

🌀 Daugman Normalization

I use Daugman normalization to transform the iris image into a polar representation, which makes it easier to compare with other iris images.
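The rubber-sheet mapping can be sketched as follows (nearest-neighbour sampling for brevity; the output size and boundary radii are illustrative, and the real pupil/iris circles would come from the Hough step):

```python
import numpy as np

def daugman_normalize(img, center, r_pupil, r_iris, out_h=64, out_w=256):
    """Daugman rubber-sheet model: unwrap the iris annulus between the
    pupil and iris boundaries into a fixed-size polar strip."""
    cy, cx = center
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, out_h)
    # sample along rays from the pupil boundary out to the iris boundary
    ys = cy + radii[:, None] * np.sin(thetas)[None, :]
    xs = cx + radii[:, None] * np.cos(thetas)[None, :]
    ys = np.clip(np.rint(ys).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.rint(xs).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]
```

Because every iris is mapped to the same fixed-size strip, rotations of the eye become horizontal shifts and images can be compared directly.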

🤖 Siamese Network

We implement a Siamese network for identification, which takes in two iris images and outputs a similarity score. We use contrastive loss to train the network, which helps to optimize the similarity scores.
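A minimal PyTorch sketch of the pairing and loss is shown below. The layer sizes and margin are hypothetical placeholders; the notebook defines the actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    """Twin encoder with shared weights; returns the Euclidean
    distance between the two embeddings."""
    def __init__(self, in_dim=64 * 256, emb_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))

    def forward(self, a, b):
        return F.pairwise_distance(self.encoder(a), self.encoder(b))

def contrastive_loss(dist, label, margin=1.0):
    """label = 1 for same-identity pairs, 0 for different ones.
    Similar pairs are pulled together; dissimilar pairs are pushed
    apart until their distance exceeds the margin."""
    return torch.mean(label * dist.pow(2)
                      + (1 - label) * F.relu(margin - dist).pow(2))
```

Because both inputs pass through the same encoder, the network learns an embedding space where distance itself is the similarity score.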

🚀 Future works and get started

Note that I did not train a classifier on the representations learned by the Siamese network. You can use this repository to preprocess your dataset and learn representations, then train a classifier on top of them for better results.

To get started with our iris dataset identification project, follow these steps:

  1. 🔗 Clone this repository: `git clone https://github.com/sobhanshukueian/Iris-Identification.git`
  2. 🖼️ Run the preprocessing notebook: `preprocess.ipynb`
  3. 🤖 Train the Siamese network: `Siamese Identification.ipynb`

📊 Visualization and Results (after 15 epochs of training)

We visualize the iris images before and after preprocessing, as well as examples of the Siamese network's similarity scores.

Iris Images Before and After Preprocessing


Siamese Network Euclidean Distances

Embedding Visualizations

Results Without Using a Classifier

⭐️ Please Star This Repo ⭐️

If you found this project useful or interesting, please consider giving it a star on GitHub! This helps other users discover the project and provides valuable feedback to the maintainers.

Thank you for your support!
