
Dromedary Logo

Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision


Introduction

Dromedary is an open-source, self-aligned language model trained with minimal human supervision. For comprehensive details and insights, please see our project page and paper.

Dromedary Pipeline

Setup

To train your own self-aligned model with the LLaMA base language model, or to perform inference on a number of GPUs other than 1, 2, 4, 8, or 16, you should install our customized llama_dromedary package.

In a conda environment with PyTorch / CUDA available, run:

cd llama_dromedary
pip install -r requirements.txt
pip install -e .
cd ..

Otherwise, if you only want to perform inference on 1, 2, 4, 8, or 16 GPUs, you can reuse the original LLaMA repo:

git clone https://github.com/facebookresearch/llama.git
cd llama
pip install -r requirements.txt
pip install -e .
cd ..

In either case, you should also install the packages required for inference:

cd inference
pip install -r requirements.txt

Model Weights

We release Dromedary weights as delta weights to comply with the LLaMA model license. You can add our delta to the original LLaMA weights to obtain the Dromedary weights. Instructions:

  1. Get the original LLaMA weights in the Hugging Face format by following the instructions here.
  2. Download the LoRA delta weights from our Hugging Face model hub (an illustrative merge sketch follows these steps).
  3. Follow our inference guide to see how to deploy Dromedary/LLaMA on your own machine with model parallelism (which should be significantly faster than Hugging Face's default pipeline parallelism when using multiple GPUs).
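The actual merge is handled by the scripts in this repo; purely as an illustration, the sketch below shows what applying a LoRA delta to a base checkpoint typically looks like with Hugging Face PEFT. The paths below are placeholders, not the real checkpoint or hub names.

# Hypothetical sketch: fold a LoRA delta into base LLaMA weights with PEFT.
# All paths are placeholders; use the repo's own scripts for the real IDs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/llama-hf"               # original LLaMA weights (HF format)
delta_path = "path/to/dromedary-lora-delta"  # downloaded LoRA delta weights
output_path = "path/to/dromedary-merged"

# Load the base model, attach the LoRA delta, and merge it into the base layers.
base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, delta_path)
model = model.merge_and_unload()

model.save_pretrained(output_path)
AutoTokenizer.from_pretrained(base_path).save_pretrained(output_path)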

Inference

We provide a chatbot demo for Dromedary.
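The demo itself runs on our model-parallel inference stack. As a hedged illustration only, the snippet below shows how a merged Hugging Face checkpoint could be queried with plain transformers APIs; the path and prompt format are assumptions, not the demo's actual interface, and this route will be slower than our model-parallel code on multiple GPUs.

# Illustrative only: greedy generation from a merged checkpoint via
# plain Hugging Face APIs (not the repo's model-parallel demo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "path/to/dromedary-merged"  # placeholder from the merge sketch above
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, device_map="auto"
)

prompt = "What are the main principles behind Dromedary?"  # prompt format assumed
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))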

Training

We provide the full training pipeline of Dromedary for reproduction.

Prompts

All the human annotations used in this project can be found here.

TODOs

  • Add the requirements.txt for the training pipeline.
  • Add the evaluation code for TruthfulQA and HHH Eval.
  • Release the Dromedary delta weights on the Hugging Face model hub.
  • Release the synthetic training data of Dromedary.
  • Add support for streaming inference in the chatbot demo.
  • Fix the Hugging Face datasets/accelerate bug when fine-tuning in a distributed setting.

Citation

Please cite the following paper if you use the data or code in this repo.

@misc{sun2023principledriven,
      title={Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision},
      author={Zhiqing Sun and Yikang Shen and Qinhong Zhou and Hongxin Zhang and Zhenfang Chen and David Cox and Yiming Yang and Chuang Gan},
      year={2023},
      eprint={2305.03047},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Acknowledgements

We thank Yizhong Wang for providing the code for the parse analysis plot. We also thank the Meta LLaMA team, the Stanford Alpaca team, the Vicuna team, Alpaca-LoRA, and Hugging Face PEFT for their open-source efforts in democratizing large language models.
