Semantic Segmentation using Auto Encoders

A Lightweight Human (Person) Segmentation Model built using Autoencoders, trained on COCO.
Model Notebook · Report Bug

Demo GIF

About The Project

Inspired by UNet (Paper), which is a form of autoencoder with skip connections, I wondered: why can't a much shallower network create segmentation masks for a single object class? Hence, the birth of this small project.

The primary goal is to determine whether a shallow end-to-end CNN can learn features as complicated as human beings. Hence, this notebook was created as a proof of concept.
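
For illustration, a minimal sketch of such a shallow encoder-decoder with skip connections in Keras could look like the following. The layer widths and depth here are assumptions for the example, not the exact architecture from the notebook.

# A minimal sketch of a shallow autoencoder with skip connections, in the
# spirit of UNet but far smaller. Layer widths are illustrative assumptions,
# not the exact architecture used in the notebook.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_shallow_segmenter(input_shape=(512, 512, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: two conv blocks with downsampling
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(e2)

    # Bottleneck
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate the encoder features (skip connections)
    u2 = layers.UpSampling2D(2)(b)
    u2 = layers.Concatenate()([u2, e2])   # skip connection from e2
    d2 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.UpSampling2D(2)(d2)
    u1 = layers.Concatenate()([u1, e1])   # skip connection from e1
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

    # Single-channel sigmoid output: per-pixel "person" probability
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(d1)
    return Model(inputs, outputs)

model = build_shallow_segmenter()
model.compile(optimizer="adam", loss="binary_crossentropy")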

The notebooks do not render properly on GitHub, so please use the nbviewer links provided below to view the results.

Jupyter Notebooks - nbViewer

Dataset Information

The model is trained on the COCO dataset; person masks are extracted with the COCO API (see Dataset Directory Structure below).

Features

  • Pre-Trained Weights - The weights can be downloaded directly from here: weights.h5 - It is stored using Git LFS.
  • Fast Inference - Inference time for a batch of 32 images at 512x512 resolution on an NVIDIA RTX 2080 Ti is just 10.3 µs (see the timing sketch below).
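
A minimal timing sketch in Python is shown below. It assumes weights.h5 was saved as a full Keras model; if the file holds weights only, rebuild the architecture first and use model.load_weights() instead.

# Sketch: load the released weights and time a batch-of-32 forward pass.
# Assumes weights.h5 is a full saved Keras model (an assumption; rebuild
# the architecture and call model.load_weights() if it holds weights only).
import time
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("weights.h5")

batch = np.random.rand(32, 512, 512, 3).astype("float32")  # dummy inputs
masks = model.predict(batch)    # warm-up: builds the graph, moves data to GPU

start = time.perf_counter()
masks = model.predict(batch)    # timed run
print(f"Batch of 32 took {(time.perf_counter() - start) * 1e3:.2f} ms")
print(masks.shape)              # per-pixel person probabilities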

Results

Images (Left to Right): Input Image, Predicted Image, Thresholded Mask @ 0.5, Ground Truth Mask

Result 1 Result 2 Result 3 Result 4 Result 5 Result 6 Result 7 Result 8 Result 9 Result 10 Result 11 Result 12
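
Continuing the timing sketch from the Features section, the thresholded mask in the third column is produced by binarizing a predicted probability map at 0.5:

# Binarize one predicted probability map (values in [0, 1]) at 0.5 to get
# the hard mask shown in the third column. `masks` is the batch of
# predictions from the timing sketch above.
import numpy as np

pred = masks[0]                                            # shape (512, 512, 1)
binary_mask = (pred[..., 0] > 0.5).astype(np.uint8) * 255  # 0/255 mask image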

How to Run

The experiment should be fairly reproducible. However, a GPU is recommended for training; for inference, a CPU-only system would suffice.

Hardware Used for the Experiment

  • CPU: AMD Ryzen 7 3700X - 8 Cores 16 Threads
  • GPU: Nvidia GeForce RTX 2080 Ti 11 GB
  • RAM: 32 GB DDR4 @ 3200 MHz
  • Storage: 1 TB NVMe SSD (This is not important, even a normal SSD would suffice)
  • OS: Ubuntu 20.10

Alternative Option: Google Colaboratory - GPU Kernel

Dataset Directory Structure (For Training)

  • Use the COCO API to extract the masks from the dataset (refer to the Dataset Preparation.ipynb notebook; a sketch follows the directory tree below).
  • Save the masks in a directory as .jpg images.
  • Example Directory Structure:
.
├── images
│   ├── train
│   │   └── *.jpg
│   └── val
│       └── *.jpg
└── masks
    ├── train
    │   └── *.jpg
    └── val
        └── *.jpg
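
A rough sketch of the extraction step with pycocotools is given below. The annotation path, the person-only filter, and the merge of all person instances into one mask are assumptions for this example; the Dataset Preparation.ipynb notebook is the authoritative version.

# Sketch: extract per-image person masks from COCO with pycocotools and
# save them as .jpg files matching the directory structure above.
# Paths are assumptions; adapt them to your COCO download.
import os
import cv2
import numpy as np
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")
person_id = coco.getCatIds(catNms=["person"])[0]
img_ids = coco.getImgIds(catIds=[person_id])

os.makedirs("masks/train", exist_ok=True)
for info in coco.loadImgs(img_ids):
    ann_ids = coco.getAnnIds(imgIds=info["id"], catIds=[person_id], iscrowd=False)
    # Merge every person instance in the image into a single binary mask
    mask = np.zeros((info["height"], info["width"]), dtype=np.uint8)
    for ann in coco.loadAnns(ann_ids):
        mask = np.maximum(mask, coco.annToMask(ann))
    out_name = os.path.splitext(info["file_name"])[0] + ".jpg"
    cv2.imwrite(os.path.join("masks/train", out_name), mask * 255)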

Built With

A simple list of deep learning libraries: the main architecture/model is developed with Keras, which ships as part of TensorFlow 2.x.
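
Since Keras ships inside TensorFlow 2.x, no separate Keras installation is needed:

# Keras is imported from the TensorFlow 2.x package itself
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)      # e.g. 2.x
print(keras.__version__)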

Changelog

Since this is a Proof of Concept Project, I am not maintaining a CHANGELOG.md at the moment. However, the primary goal is to improve the architecture to make the predicted masks more accurate.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Animikh Aich