
Video Captioning For Arabic Sign Language Recognition At Sentence Level

Table of Contents

  1. Description
  2. Dataset
  3. Installation and Usage

Description

An encoder-decoder deep learning model (with or without an attention mechanism) that takes an Arabic sign-language video as input and outputs its translation as text.

Note: For the detailed model architecture and preprocessing, refer to the Video Captioning.ipynb file.
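The encoder-decoder idea can be sketched minimally in NumPy (a hypothetical illustration only; the shapes, vocabulary size, and dot-product attention here are assumptions, not the notebook's actual architecture): the encoder's per-frame features are attended over at each decoding step, and the decoder greedily emits one token per step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 80 frames, each reduced to a 64-dim feature vector.
frames = rng.standard_normal((80, 64))   # encoder output for one video
W_out = rng.standard_normal((64, 100))   # decoder projection to a 100-word vocab

def attention_context(query, keys):
    """Dot-product attention: weight frame features by similarity to the query."""
    scores = keys @ query                 # (80,) one score per frame
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over frames
    return weights @ keys                 # (64,) weighted context vector

# Greedy decoder loop: attend over all frames, then predict a token.
query = frames.mean(axis=0)               # initial query: mean-pooled video
tokens = []
for _ in range(5):                        # emit 5 tokens
    context = attention_context(query, frames)
    logits = context @ W_out
    tokens.append(int(logits.argmax()))
    query = context                       # feed context back as the next query

print(tokens)                             # 5 predicted token ids
```

A trained model would replace the random matrices with learned encoder/decoder weights and stop at an end-of-sentence token rather than a fixed length.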

Dataset

The dataset consists of:

  1. 534 video samples in total.
  2. 10 different sentences, each performed by three signers.
  3. Video samples already normalized to 80 frames each.
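Normalizing every clip to a fixed 80 frames can be done, for example, by uniform temporal sampling (a sketch under that assumption; the dataset's actual normalization procedure is not specified here):

```python
import numpy as np

def sample_frames(video: np.ndarray, target: int = 80) -> np.ndarray:
    """Uniformly sample (or repeat) frame indices so every clip has `target` frames."""
    n = video.shape[0]
    idx = np.linspace(0, n - 1, num=target).round().astype(int)
    return video[idx]

# Example: a 120-frame clip of 64x64 grayscale frames -> 80 frames.
clip = np.zeros((120, 64, 64))
fixed = sample_frames(clip)
print(fixed.shape)  # (80, 64, 64)
```

The same function also stretches clips shorter than 80 frames, since repeated indices duplicate frames.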

Installation and Usage

  • Requirements
    • Python >= 3.6
  • git clone https://github.com/AI-14/video-captioning-for-arabic-sign-language-recognition-at-sentence-level.git - clones the repository
  • cd video-captioning-for-arabic-sign-language-recognition-at-sentence-level - enters the project directory
  • py -m venv yourVenvName - creates a virtual environment
  • yourVenvName\Scripts\activate - activates the virtual environment (Windows; use source yourVenvName/bin/activate on Linux/macOS)
  • pip install -r requirements.txt - installs all required modules