Code to reproduce the experiments in the paper "Few-Shot Learning by Dimensionality Reduction in Gradient Space" (published at CoLLAs 2022).
Blog post: https://ml-jku.github.io/subgd
- Clone the repository.
- Install the conda environment: `conda env create -f environments/environment_{cpu,gpu}.yml`
- Activate the environment: `conda activate fs`
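Put together, the setup might look like this. The repository URL is an assumption based on the blog post address; adjust it to wherever you clone from:

```bash
# Clone and set up the environment (GPU variant shown; use environment_cpu.yml on CPU-only machines).
# The repository URL below is assumed from the blog post address.
git clone https://github.com/ml-jku/subgd.git
cd subgd
conda env create -f environments/environment_gpu.yml
conda activate fs
```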
The folder `experiments/` contains the configuration files and notebooks to run the sinusoid, RLC, and hydrology experiments from the paper. Each experiment folder contains notebooks that walk through the experiments.
This code base can be used for few-shot and supervised learning experiments in various ways. The following instructions outline the main capabilities of the library.
- Create a run configuration file `config.yml`, e.g., based on `example-config.yml` or one of the config files in the `experiments/` folders.
- Train the model with `python tsfewshot/run.py train --config-file config.yml`. This creates a folder `runs/dev_run_yymmdd_hhmmss`, where the trained model will be stored. Optionally, you can specify `--gpu N`, where `N` is the id of the GPU to use; negative values will use the CPU.
- Test the model with `python tsfewshot/run.py eval --run-dir runs/dev_run_yymmdd_hhmmss`. Optionally, you can specify the GPU with `--gpu`, the split (train/val/test) with `--split`, and the epoch with `--epoch` (`-1` means the epoch with the best validation metric). See the sketch after this list for a complete example.
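As a sketch, a full train-and-evaluate cycle could look like the following. The copy of `example-config.yml` stands in for a real configuration, and the run-directory timestamp is a placeholder for whatever `train` actually creates:

```bash
# Start from the example configuration and adjust its entries to your experiment.
cp example-config.yml config.yml

# Train on GPU 0 (pass a negative id to train on the CPU instead).
python tsfewshot/run.py train --config-file config.yml --gpu 0

# Evaluate the checkpoint with the best validation metric on the test split.
# Replace the timestamp with the run directory that training created.
python tsfewshot/run.py eval --run-dir runs/dev_run_220101_120000 --split test --epoch -1
```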
Use `python tsfewshot/run_scheduler.py {train/eval/finetune} --directory dirname --gpu-ids 0 1 2 --runs-per-gpu 3` to train/evaluate/finetune on all configurations/directories inside `dirname`. This will start up to 3 parallel runs on each specified GPU (0, 1, and 2). Optionally, you can filter configuration files/directories with `--name-filter` (see the sketch below).
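For illustration, a hyperparameter sweep might be scheduled like this; the `sweep/` directory layout and the filter value are hypothetical:

```bash
# Assumed layout: one run configuration per subdirectory of sweep/, e.g.,
#   sweep/lr0.001/config.yml
#   sweep/lr0.01/config.yml
# Train all configurations, with up to 3 parallel runs on each of GPUs 0, 1, and 2.
python tsfewshot/run_scheduler.py train --directory sweep --gpu-ids 0 1 2 --runs-per-gpu 3

# Restrict the scheduler to configurations whose names match a filter (value is illustrative).
python tsfewshot/run_scheduler.py train --directory sweep --gpu-ids 0 --runs-per-gpu 2 --name-filter lr0.001
```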
To finetune a pretrained model on a full dataset (not just a support set), use `python tsfewshot/run.py finetune --config-file finetuneconfig.yml` (again, optionally with `--gpu`). The provided config must have the entries `base_run_dir` (path to the pretraining run) and `checkpoint_path` (path to the initial model). All other values in the finetuning config will overwrite the corresponding values from the pretraining config.
Note: For technical reasons, this will finetune on the validation set and evaluate on the test set of the run configuration, i.e., you might want to overwrite `val_datasets` and `test_datasets` in the finetuning config. A sketch of such a config follows below.
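A minimal sketch of a finetuning config: only `base_run_dir`, `checkpoint_path`, `val_datasets`, and `test_datasets` are documented above, and all concrete paths and dataset names here are placeholders:

```bash
# Write a hypothetical finetuning config; every path and dataset name is a placeholder.
cat > finetuneconfig.yml << 'EOF'
base_run_dir: runs/dev_run_220101_120000                       # pretraining run (placeholder)
checkpoint_path: runs/dev_run_220101_120000/model_epoch030.p   # initial model (placeholder path)
val_datasets:    # finetuning data (the finetune step trains on the validation set)
  - my-finetune-dataset
test_datasets:   # evaluation data
  - my-test-dataset
EOF

python tsfewshot/run.py finetune --config-file finetuneconfig.yml --gpu 0
```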
This paper has been published in the proceedings of the Conference on Lifelong Learning Agents (CoLLAs) 2022:
```bibtex
@inproceedings{gauch22subgd,
  title     = {Few-Shot Learning by Dimensionality Reduction in Gradient Space},
  author    = {Gauch, Martin and Beck, Maximilian and Adler, Thomas and Kotsur, Dmytro and Fiel, Stefan and Eghbal-zadeh, Hamid and Brandstetter, Johannes and Kofler, Johannes and Holzleitner, Markus and Zellinger, Werner and Klotz, Daniel and Hochreiter, Sepp and Lehner, Sebastian},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {1043--1064},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {8},
  publisher = {PMLR},
}
```
To run the documentation locally:

```bash
cd docs
make html
python -m http.server --directory _build/html
```

Then open http://localhost:8000 in your browser.

To create the apidocs from scratch: `sphinx-apidoc -o docs/api/ tsfewshot`