This repository contains the source code for our NeurIPS 2024 paper Aligner: Efficient Alignment by Learning to Correct.
Jiaming Ji*, Boyuan Chen*, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Juntao Dai, Tianyi Qiu and Yaodong Yang
Work done by PKU-Alignment Team
With the rapid development of large language models (LLMs) and ever-evolving practical requirements, finding an efficient and effective alignment method has never been more critical. However, the tension between the complexity of current alignment methods and the need for rapid iteration in deployment scenarios necessitates the development of a model-agnostic alignment approach that can operate under these constraints. In this paper, we introduce Aligner, a novel and simple alignment paradigm that learns the correctional residuals between preferred and dispreferred answers using a small model. Designed as a model-agnostic, plug-and-play module, Aligner can be directly applied to various open-source and API-based models with only one-off training, making it suitable for rapid iteration. Notably, Aligner can be applied to any powerful, large-scale upstream models. Moreover, it can even iteratively bootstrap the upstream models using corrected responses as synthetic human preference data, breaking through the model's performance ceiling. Our experiments demonstrate performance improvements by deploying the same Aligner model across 11 different LLMs, evaluated on the 3H dimensions (helpfulness, harmlessness, and honesty). Specifically, Aligner-7B has achieved an average improvement of 68.9% in helpfulness and 23.8% in harmlessness across the tested LLMs while also effectively reducing hallucination. In the Alpaca-Eval leaderboard, stacking Aligner-2B on GPT-4 Turbo improved its LC Win Rate from 55.0% to 58.3%, surpassing GPT-4 Omni's 57.5% Win Rate (community report).
See our website for more details: https://pku-aligner.github.io/
- Aligner: Efficient Alignment by Learning to Correct
- Installation
- Training
- Dataset & Models
- Acknowledgment
As a plug-and-play module, Aligner stacks upon an upstream LLM. The Aligner redistributes initial answers from the upstream model into more helpful and harmless answers, thus aligning the composed LLM's responses with human intentions.
Like a residual block that adds modifications via a shortcut without altering the base structure, the Aligner employs a copy-and-correct method to improve the original answer. This analogy highlights the Aligner's dual role: preserving the parameters of the upstream model while enhancing its outputs to align with desired outcomes.
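Conceptually, the composed system first queries the upstream model and then feeds both the question and the initial answer to the Aligner, which emits a corrected answer. The sketch below illustrates this flow with hypothetical `upstream_generate` and `aligner_correct` stand-ins (neither is a real API in this repository; a real Aligner is a small LM conditioned on both the question and the initial answer):

```python
def upstream_generate(question: str) -> str:
    """Stand-in for any frozen upstream LLM (open-source or API-based)."""
    return "Initial answer to: " + question


def aligner_correct(question: str, answer: str) -> str:
    """Stand-in for the Aligner: maps (question, initial answer) to a
    corrected answer that is more helpful and harmless."""
    return answer + " [corrected by Aligner]"


def composed_respond(question: str) -> str:
    # The upstream model's parameters are never touched; the Aligner only
    # post-processes its output, like a residual connection on the answer.
    initial = upstream_generate(question)
    return aligner_correct(question, initial)
```

Because the Aligner only sees (question, answer) pairs, the same trained Aligner can be stacked on any upstream model without retraining.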
Aligner achieves significant performance improvements in all settings. All assessments in this table were conducted by integrating various upstream models with Aligners and comparing them against the original models to quantify the percentage increase on the 3H standard. When integrated and assessed with various upstream models, the Aligner requires only a single training session (i.e., the Aligner operates in a zero-shot manner and enhances the performance of all upstream models).
For more details, please refer to our website.
Clone the source code from GitHub:
git clone https://github.com/cby-pku/aligner.git
cd aligner
Native Runner: Set up a conda environment using `conda` / `mamba`:
conda env create --file conda-recipe.yaml # or `mamba env create --file conda-recipe.yaml`
`aligner` supports a complete pipeline for Aligner residual correction training.
- Follow the instructions in the Installation section to set up the training environment properly.
conda activate aligner
export WANDB_API_KEY="..." # your W&B API key here
- Supervised Fine-Tuning (SFT)
bash scripts/sft-correction.sh \
--train_datasets <your-correction-dataset> \
--model_name_or_path <your-model-name-or-checkpoint-path> \
--output_dir output/sft
NOTE:
- You may need to update some of the parameters in the script according to your machine setup, such as the number of GPUs for training, the training batch size, etc.
- Your dataset format should be consistent with `aligner/template-dataset.json`.
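A correction-training example pairs a question with the upstream model's initial answer and the preferred corrected answer. The snippet below builds and validates one such record; the exact field names (`question`, `answer`, `correction`) are an assumption for illustration only, so check `aligner/template-dataset.json` in the repository for the real schema:

```python
import json

# Hypothetical correction record -- field names are an assumption,
# not the confirmed schema of aligner/template-dataset.json.
record = {
    "question": "How can I improve my sleep?",
    "answer": "Just take lots of sleeping pills.",  # upstream model's initial answer
    "correction": (
        "Prefer a consistent sleep schedule and good sleep hygiene; "
        "consult a doctor before using any medication."
    ),
}


def validate(example: dict) -> bool:
    """Check that a training example carries all three non-empty string fields."""
    return all(
        isinstance(example.get(key), str) and example[key]
        for key in ("question", "answer", "correction")
    )


# Datasets are stored as a JSON list of such records.
with open("correction-dataset.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)
```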
- For the reproduction of more alignment training methods such as DPO or RLHF, please refer to the Align-Anything or Safe-RLHF repository.
You can register a new dataset by following the instructions in the `aligner/training/datasets/raw/correction.py` file.
You can also design your own user prompts to develop more specific Aligners, such as Instruct-Aligner.
Note that the whole system prompt starts with `BEGINNING OF CONVERSATION:`; refer to `aligner/training/configs/constants.py` for details.
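As a rough illustration of how such a prompt might be assembled: only the `BEGINNING OF CONVERSATION:` prefix is confirmed by this README, while the `USER:`/`ASSISTANT:` markers below are assumptions for the sketch, so see `aligner/training/configs/constants.py` for the actual constants used in training.

```python
# Assumed prompt fragments -- verify against aligner/training/configs/constants.py.
PROMPT_BEGIN = "BEGINNING OF CONVERSATION: "
PROMPT_USER = "USER: {input} "
PROMPT_ASSISTANT = "ASSISTANT:"


def build_prompt(user_input: str) -> str:
    """Wrap the user content (e.g., a question plus an initial answer to be
    corrected) into the full conversation-style training prompt."""
    return PROMPT_BEGIN + PROMPT_USER.format(input=user_input) + PROMPT_ASSISTANT
```

Customizing the user content inside `PROMPT_USER` is the natural hook for building more specific Aligners.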
We have open-sourced a 20K training dataset and a 7B Aligner model. Further datasets and models will be released soon.
Please cite our work if you find it useful and meaningful.
@inproceedings{ji2024aligner,
title={Aligner: Efficient Alignment by Learning to Correct},
author={Jiaming Ji and Boyuan Chen and Hantao Lou and Donghai Hong and Borong Zhang and Xuehai Pan and Tianyi Qiu and Juntao Dai and Yaodong Yang},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=kq166jACVP}
}
This repository benefits from LLaMA, Stanford Alpaca, DeepSpeed, DeepSpeed-Chat and Safe-RLHF.
Thanks for their wonderful works and their efforts to further promote LLM research. Aligner and its related assets are built and open-sourced with love and respect ❤️.
This work is supported and funded by Peking University.
Aligner is released under Apache License 2.0.