Self-Supervised Time Series Representation Learning with Temporal-Instance Similarity Distillation

This repository contains the official implementation of the paper "Self-Supervised Time Series Representation Learning with Temporal-Instance Similarity Distillation" (ICML 2022 Pretraining Workshop).

Requirements

We use conda to create a new environment for running the experiments, called atom-ssl-ts, from env.yaml:

conda env create -f env.yaml
conda activate atom-ssl-ts

We used pre-commit hooks (flake8 and black) to format the code, but their use is optional; you do not need them to reproduce the results of the paper. To enable the hooks, run pre-commit install before committing code that builds on this project (see the sketch below).
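A minimal sketch of enabling and running the hooks, assuming pre-commit is available in the atom-ssl-ts environment and the repository ships a pre-commit configuration (install pre-commit with pip first if it is missing):

pip install pre-commit        # only needed if pre-commit is not already in the environment
pre-commit install            # register the git hooks for this clone
pre-commit run --all-files    # optionally run flake8/black once over the whole tree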

Data

We evaluate our results on 5 different datasets. Use the links below to download them.

The electricity and KPI datasets need preprocessing. Run the preprocessing scripts under datasets to create their preprocessed versions, and use --output to specify where the results should be saved (see the example below).
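A hedged example of the intended usage; the script names below are placeholders, so check the datasets directory for the actual file names. Only the --output flag is taken from the instructions above:

# Hypothetical script names; substitute the real ones from the datasets folder
python datasets/preprocess_electricity.py --output /path/to/preprocessed/electricity
python datasets/preprocess_kpi.py --output /path/to/preprocessed/kpi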

The data path for all datasets is specified by the PATH constant in constants.py. Each dataset is stored under its corresponding folder inside this path. Make sure to update this path before running the code.
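For illustration, one way to lay out the data directory that PATH points to; the folder names below are assumptions, not taken from the repository, so use whatever folder names the loaders expect:

# Create one folder per dataset under the directory that PATH points to
# (folder names are assumptions for illustration only)
mkdir -p /data/ssl-ts/{UCR,UEA,ETT,electricity,KPI}
# Then edit the PATH constant in constants.py to match, e.g. PATH = "/data/ssl-ts"
grep -n "PATH" constants.py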

Experiments

To reproduce the experiments, use the scripts provided under scripts in the root directory of the project. These scripts use Slurm to submit the jobs to the cluster; you need to specify the node name and the job name when running a script. Refer to scripts/submit_job_forecasting.sh for further details on how to choose dataset, run_name, and loader for the forecasting experiments. Each script runs the pipeline with 5 different seeds, so it submits 5 jobs to the cluster.

# UCR dataset:
bash scripts/submit_job_ucr.sh <node_name> <job_name>

# UEA dataset:
bash scripts/submit_job_uea.sh <node_name> <job_name>

# KPI dataset:
bash scripts/submit_job_kpi.sh <node_name> <job_name>

# ETT and electricity datasets:
bash scripts/submit_job_forecasting.sh <node_name> <job_name> <dataset> <run_name> <loader>

# Example:
bash scripts/submit_job_forecasting.sh sample_node sample_job ETTh2 forecast_univar forecast_csv_univar
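Since each script submits 5 Slurm jobs (one per seed), a quick way to confirm they were queued, assuming standard Slurm tooling is available on the cluster:

# List your queued and running jobs after submitting a script
squeue -u $USER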

Hyper-parameters

The following table shows the set of hyper-parameters used in all of our experiments:

Temperature   Queue Size   Alpha   Dropout   Learning Rate   Momentum
0.07          128          0.5     0.1       0.001           0.999
