
Handle Kaldi audio features

  • Install kaldiio: `pip install kaldiio`
  • Set the data source path in preprocessing_audio.py
  • Run preprocessing_audio.py (see the loading sketch below)
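
A minimal sketch of the loading step this preprocessing performs, assuming the audio features live in a Kaldi `feats.scp` file and are mean-pooled into one vector per clip; the actual paths, keys, and pooling in preprocessing_audio.py may differ:

```python
# Hedged sketch: read Kaldi-format features with kaldiio and pool per clip.
# 'feats.scp' and the mean-pooling are assumptions, not the repo's exact logic.
import numpy as np
import kaldiio

feats = kaldiio.load_scp('feats.scp')   # lazy dict: utterance-id -> (frames, dims) array
for utt_id in feats:
    mat = feats[utt_id]                 # frame-level feature matrix
    clip_feat = mat.mean(axis=0)        # collapse frames into one clip-level vector
    np.save('%s.npy' % utt_id, clip_feat)
```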

Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval

Code for the video-text retrieval methods from "Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval" by Niluthpol C. Mithun, Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury (ICMR 2018).

Dependencies

This code is written in Python. The following packages are required:

  • Python 2.7
  • PyTorch (>0.3)
  • Tensorboard
  • NLTK Punkt Sentence Tokenizer (one-time download; see the snippet below)
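
The Punkt models are not bundled with NLTK itself. A hedged one-time setup, assuming the repo's caption/vocabulary code tokenizes with `nltk.tokenize.word_tokenize`:

```python
# One-time download of the Punkt tokenizer models.
# Assumption: the repo's caption preprocessing calls word_tokenize,
# which depends on these models.
import nltk
nltk.download('punkt')

from nltk.tokenize import word_tokenize
print(word_tokenize('a man is playing a guitar'))
```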

Evaluate Models

  • Download the data and models from https://drive.google.com/drive/folders/1t3MwiCR72HDo6XiPvWSZpenqv4CGjnKl
  • To evaluate on the MSR-VTT dataset: `python test_weighted.py` (a generic recall@K sketch follows)
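
For context, cross-modal retrieval on MSR-VTT is typically scored with recall@K. Below is a generic sketch of that metric, not the repo's own evaluation code; the square text-video similarity matrix `sims` with ground-truth pairs on the diagonal is an assumption:

```python
# Generic recall@K for retrieval: the percentage of queries whose
# ground-truth item is ranked within the top k by similarity.
import numpy as np

def recall_at_k(sims, k):
    ranks = []
    for i in range(sims.shape[0]):
        order = np.argsort(sims[i])[::-1]         # candidate indices, best first
        ranks.append(np.where(order == i)[0][0])  # rank of the ground-truth match
    ranks = np.array(ranks)
    return 100.0 * np.mean(ranks < k)             # e.g. R@1, R@5, R@10
```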

Reference

If you use our code or models, please cite the following paper:

@inproceedings{mithun2018learning,
  title={Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval},
  author={Mithun, Niluthpol C and Li, Juncheng and Metze, Florian and Roy-Chowdhury, Amit K},
  booktitle={ICMR},
  year={2018},
  organization={ACM}
}

This code is built on top of VSE++ (https://github.com/fartashf/vsepp).

Contact: Niluthpol Chowdhury Mithun (nmith001@ucr.edu)
