RLXF-Community-Session

Dataset

reddit_fine-tuning

reddit_comparison_dataset

How the dataset splits will be used (a loading sketch follows the list):

  • train (92k) - for PPO training
  • test (83.6k) - held out for evaluation
  • valid1 (33k) - for fine-tuning the base model (SFT)
  • valid2 (50k) - for training the reward model
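
The splits above can be pulled straight from the Hugging Face Hub. A minimal sketch, assuming the comparison data lives under the CarperAI/openai_summarize_comparisons hub ID; swap in the actual reddit_comparison_dataset source linked above if it differs:

```python
from datasets import load_dataset

# Hub ID is an assumption -- replace with the actual reddit_comparison_dataset source.
comparisons = load_dataset("CarperAI/openai_summarize_comparisons")

train = comparisons["train"]    # ~92k examples, used for PPO
test = comparisons["test"]      # ~83.6k examples, held out for evaluation
valid1 = comparisons["valid1"]  # ~33k examples, used to fine-tune the base model
valid2 = comparisons["valid2"]  # ~50k examples, used to train the reward model
```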

Model

Reference (supervised-fine-tuned) model:

We will be using the T5-base model (220M parameters). Being an encoder-decoder, it is well suited to summarization. See the notebook for how we applied SFT to this model.
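
For orientation, here is a minimal sketch of what that SFT step can look like with Hugging Face's Seq2SeqTrainer. The hub ID, column names ("prompt", "chosen"), and hyperparameters are assumptions; the linked notebook remains the reference:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          DataCollatorForSeq2Seq)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical hub ID and column names -- adapt to the dataset linked above.
valid1 = load_dataset("CarperAI/openai_summarize_comparisons", split="valid1")

def preprocess(batch):
    # T5 expects a task prefix; train on the human-preferred ("chosen") summaries.
    inputs = tokenizer(["summarize: " + p for p in batch["prompt"]],
                       max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["chosen"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = valid1.map(preprocess, batched=True, remove_columns=valid1.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="t5-base-sft",
                                  per_device_train_batch_size=8,
                                  num_train_epochs=1,
                                  learning_rate=5e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```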

Here are the trained weights:

Policy (target) model

The target (policy) model will be the same as the fine-tuned base model: T5-base (220M parameters), the encoder-decoder fine-tuned above.
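
For PPO with TRL, the policy is typically this SFT checkpoint wrapped with a value head. A minimal sketch, assuming the SFT weights were saved to a local "./t5-base-sft" directory (a placeholder path):

```python
from trl import AutoModelForSeq2SeqLMWithValueHead, create_reference_model

# "./t5-base-sft" is a placeholder path for the SFT checkpoint described above.
policy_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("./t5-base-sft")
ref_model = create_reference_model(policy_model)  # frozen copy used for the KL penalty
```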

Reward model

We will be using BERT: as an encoder-only model, it is well suited to producing a scalar reward (or penalty) from the input text.
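
As a rough illustration (not the exact notebook setup), a BERT reward model can be trained on the comparison pairs with a pairwise ranking loss; the checkpoint name, input formatting, and loss below are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed BERT variant
# num_labels=1 so the classification head outputs a single scalar reward.
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)

def pairwise_loss(prompt, chosen, rejected):
    # Score both summaries and push the chosen one above the rejected one.
    enc = tokenizer([prompt + " " + chosen, prompt + " " + rejected],
                    padding=True, truncation=True, max_length=512,
                    return_tensors="pt")
    scores = reward_model(**enc).logits.squeeze(-1)  # shape (2,)
    return -torch.nn.functional.logsigmoid(scores[0] - scores[1])
```

Minimizing this loss over the valid2 comparison pairs pushes the model to assign higher scores to the preferred summaries.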

Weights

The weights should be downloaded to your local machine from this link; once downloaded, they can be used from the notebook, which loads the model with a couple of lines:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("./model_bert_hf_experiment2/")
tokenizer = AutoTokenizer.from_pretrained("./model_bert_hf_experiment2/")
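
Continuing from the loading snippet above, a hedged usage sketch for scoring a single post/summary pair (the exact input format the reward model was trained on may differ; the text below is illustrative):

```python
import torch

post = "My laundry shrank two sizes after one wash..."      # example post
summary = "Washed a wool sweater on hot; it shrank badly."  # example summary

inputs = tokenizer(post + " " + summary, return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    reward = model(**inputs).logits.squeeze().item()  # single scalar reward
print(reward)
```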

PPO

notebook
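
The notebook follows TRL's PPO recipe. Below is a compressed, hedged sketch of the main loop using the pre-0.12 PPOTrainer interface; the checkpoint paths, dataset ID, generation settings, and hyperparameters are placeholders:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from trl import PPOConfig, PPOTrainer, AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("./t5-base-sft")  # placeholder SFT checkpoint
policy = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("./t5-base-sft")
reward_tok = AutoTokenizer.from_pretrained("./model_bert_hf_experiment2/")
reward_model = AutoModelForSequenceClassification.from_pretrained("./model_bert_hf_experiment2/")

# Hub ID is an assumption -- this is the PPO split described in the Dataset section.
dataset = load_dataset("CarperAI/openai_summarize_comparisons", split="train")

config = PPOConfig(batch_size=8, mini_batch_size=2, learning_rate=1.4e-5)
# With ref_model=None, TRL keeps an internal frozen copy of the policy for the KL penalty.
ppo_trainer = PPOTrainer(config, policy, ref_model=None, tokenizer=tokenizer)

for batch in dataset.iter(batch_size=config.batch_size, drop_last_batch=True):
    prompts = ["summarize: " + p for p in batch["prompt"]]
    query_tensors = [tokenizer(p, return_tensors="pt", truncation=True,
                               max_length=512).input_ids.squeeze(0)
                     for p in prompts]

    # Sample summaries from the current policy.
    response_tensors = ppo_trainer.generate(query_tensors, do_sample=True,
                                            top_p=0.9, max_new_tokens=64)
    summaries = tokenizer.batch_decode(response_tensors, skip_special_tokens=True)

    # Score each (post, summary) pair with the BERT reward model.
    rewards = []
    for post, summary in zip(batch["prompt"], summaries):
        enc = reward_tok(post + " " + summary, return_tensors="pt",
                         truncation=True, max_length=512)
        with torch.no_grad():
            rewards.append(reward_model(**enc).logits.squeeze())

    # One PPO optimisation step on the sampled batch.
    stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
```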

References

TRL_HuggingFace

illustrated_RLHF

Presentation stack

canva_RLHF

canva_RLAIF
