- (2021-04) Cross-task generalization via natural language crowdsourcing instructions paper
- (2021-04) Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections paper
- (2021-04) CrossFit: A few-shot learning challenge for cross-task generalization in NLP paper
- (2021-09) Finetuned language models are zero-shot learners paper (FLAN)
- (2021-10) Multitask prompted training enables zero-shot task generalization paper (T0)
- (2021-10) MetaICL: Learning to learn in context paper
- (2022-03) Training language models to follow instructions with human feedback paper
- (2022-04) Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks paper
- (2022-10) Scaling Instruction-Finetuned Language Models paper (Flan-T5/PaLM)
- (2023-04) WizardLM: Empowering Large Language Models to Follow Complex Instructions paper
- Instruction-Tuning-Papers - A trend that started with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).