This repository contains experiments for the block-based Structured Pruning Adapter (SPA), the Structured Pruning Low-rank PHM Adapter (SPLoPA), proposed in "Structured Pruning Adapters", combined with various structured pruning methods for weight-based transfer learning of BERT on the SQuAD question-answering dataset.
- Clone this repository and enter it:
```shell
git clone https://github.com/LukasHedegaard/block-spa-experiments.git
cd block-spa-experiments
```
- (Optionally) create and activate a conda environment:
```shell
conda create --name block-spa-experiments python=3.10
conda activate block-spa-experiments
```
- Install as an editable module:
```shell
pip install -e .[dev]
```
The experiments are distributed over two branches:
- `splopa`/`master`: runs the Structured Pruning Low-rank PHM Adapter (SPLoPA).
- `fine-pruning`: runs the fine-pruning baselines found in the block-movement-pruning repository.

Check out the branch of choice (e.g. `git checkout splopa`) before running an experiment.
For reproducibility purposes, we use a detailed Excel sheet (`hparams/hyperparameters.xlsx`) to specify hyper-parameters.
To select and spawn a run, please follow these steps:
- Open `hparams/hyperparameters.xlsx`.
- Note the "EXP ID" (col C) and "Effective encoder remain weights %" (col D) for the chosen row (this constitutes one run).
- Modify `scripts/run_squad_from_excel_sheet.py` accordingly. Note that if multiple runs are specified, each will be spawned in its own process.
- Run:
```shell
python scripts/run_squad_from_excel_sheet.py
```
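When multiple runs are specified, the launcher starts each one in its own process. A minimal sketch of that pattern, assuming hypothetical `selected_runs` values and an inline worker command (the real script reads its runs from the Excel sheet and launches the actual training entry point):

```python
import subprocess
import sys

# Hypothetical (EXP ID, "Effective encoder remain weights %") pairs;
# the real values come from hparams/hyperparameters.xlsx.
selected_runs = [("E01", 50.0), ("E02", 25.0)]

# Spawn one child process per run, then wait for all of them to finish.
procs = [
    subprocess.Popen(
        [sys.executable, "-c", f"print('run {exp_id}: {pct}% remaining')"]
    )
    for exp_id, pct in selected_runs
]
exit_codes = [p.wait() for p in procs]
```

Spawning separate processes (rather than threads) keeps each run's GPU state and random seeds isolated, at the cost of loading the model once per run.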