# Capgemini - Global Data Science Challenge Evaluation Server

Web interface for the evaluation of the different search-engine models submitted to the Global Data Science Challenge (GDSC). All teams have been provided with the GDSC dataset, which is used for model evaluation.
More details on the GDSC challenge can be found via this Yammer link.

## Setting up the project

1. Clone the project and place the `presentations` folder from the dataset into the `data` folder of the project.
2. If you are using a Python virtual environment, activate it before running any of the following commands: `source <path to virtual env>/bin/activate`
3. Navigate to the `gdsc_evaluation_server/` folder and run the following commands in sequence:
   - Create migrations: `python website/manage.py makemigrations`
   - Migrate the database: `python website/manage.py migrate`
   - Dump summaries into the database: `python scripts/summaries_data_dump.py`
   - Create a website administrator: `python website/manage.py createsuperuser`
   - Run the server: `python website/manage.py runserver 0.0.0.0:8000`
4. New users can be added on the admin page at `localhost:8000/admin` by navigating to the Users page (a programmatic alternative is sketched below).
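
Users can also be created from the Django management shell instead of the admin page. A minimal sketch, assuming the project uses Django's default `django.contrib.auth` user model (the username, email, and password below are placeholders):

```python
# Start the shell from the gdsc_evaluation_server/ folder:
#   python website/manage.py shell
# Sketch only - assumes the default django.contrib.auth User model is in use.
from django.contrib.auth.models import User

# Create a regular evaluator account (placeholder credentials).
User.objects.create_user(
    username="evaluator1",
    email="evaluator1@example.com",
    password="change-me",
)
```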

## Testing out models

1. To test your model, it is advisable to first validate it with the `scripts/validate_model.py` script.
2. Once it is validated, place your model in the `models` folder (see `models/Steve-Nieve` for help); a rough sketch of a model is shown after this list.
3. To check the sample Steve-Nieve model, first go through this README file to complete the prerequisites.
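
The interface the evaluation server expects is defined by the sample in `models/Steve-Nieve`. Purely as an illustration (the function name, parameters, and return format below are assumptions, not the actual contract), a Python search-engine model might look roughly like this:

```python
# Hypothetical sketch of a search-engine model. The real interface is defined
# by the sample in models/Steve-Nieve; the function name and return format
# here are assumptions for illustration only.
from typing import List, Tuple


def search(query: str, top_k: int = 5) -> List[Tuple[str, str]]:
    """Return up to top_k (pptx_filename, brief_summary) pairs for the query."""
    # A real model would rank the presentations from the data/ folder here;
    # this placeholder simply returns an empty result set.
    results: List[Tuple[str, str]] = []
    return results
```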

## Features of the website

1. The evaluation server selects two models and runs a search with each according to the provided search query (a rough sketch of the pairing idea follows this list).
2. Teams are selected based on their selection count and skill.
3. The selected models and their type (Python or R) can be viewed in the server console.
4. The output shows the names of the .pptx files returned by each model, together with brief summaries.
5. The .pptx file names can be clicked to download the respective files and view them in detail.
6. The evaluator can select the winning team or declare the match a 'Draw' based on the results.
7. The evaluator can also return to the search screen and re-enter a search query without selecting a winner.
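
As a rough illustration of the pairing idea in points 1-2 (the server's actual selection logic may differ; the function and weighting below are assumptions, not the implementation):

```python
# Illustrative sketch: pick two distinct teams for a match, favouring teams
# that have been selected less often. This is an assumption about the idea in
# points 1-2 above, not the evaluation server's actual code.
import random
from typing import Dict, List


def pick_two_teams(selection_counts: Dict[str, int]) -> List[str]:
    """Pick two distinct teams, weighting teams with fewer selections higher."""
    teams = list(selection_counts)
    weights = [1.0 / (1 + selection_counts[t]) for t in teams]
    first = random.choices(teams, weights=weights, k=1)[0]
    remaining = [t for t in teams if t != first]
    rem_weights = [1.0 / (1 + selection_counts[t]) for t in remaining]
    second = random.choices(remaining, weights=rem_weights, k=1)[0]
    return [first, second]
```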

## Authors

- Gautam Kar - Initial Project Setup
- Daniel Kühlwein - Project Management
- Saad Abdullah Gondal - Project Development
