Awesome resources for in-context learning and prompt engineering: mastery of LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date and cutting-edge updates.


An Open-Source Engineering Guide for Prompt-in-context-learning from EgoAlpha Lab.

📝 Papers | ⚡️ Playground | 🛠 Prompt Engineering | 🌍 ChatGPT Prompt | ⛳ LLMs Usage Guide


⭐️ Shining ⭐️: This is a fresh, daily-updated collection of resources for in-context learning and prompt engineering. As Artificial General Intelligence (AGI) approaches, let’s take action and become super learners, positioning ourselves at the forefront of this exciting era and striving for personal and professional greatness. (If you are new to the field, a minimal illustration of in-context learning follows the resource list below.)

The resources include:

🎉Papers🎉: The latest papers about in-context learning or prompt engineering.

🎉Playground🎉: Large language models that enable prompt experimentation.

🎉Prompt Engineering🎉: Prompt techniques for leveraging large language models.

🎉ChatGPT Prompt🎉: Prompt examples that can be applied in our work and daily lives.

🎉LLMs Usage Guide🎉: A quick-start guide to working with large language models via LangChain.
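
For readers new to the field, the sketch below illustrates what "in-context learning" means in practice: the model is shown a handful of worked examples inside the prompt itself and infers the task from them, with no fine-tuning or weight updates. The task, examples, and labels here are made up purely for illustration.

```python
# A minimal, hypothetical sketch of in-context (few-shot) learning:
# the "training data" lives entirely inside the prompt, and the model
# is asked to continue the pattern. No weights are updated.
FEW_SHOT_EXAMPLES = [
    ("I loved this movie!", "positive"),
    ("The plot was a complete mess.", "negative"),
    ("An instant classic.", "positive"),
]

def build_prompt(query: str) -> str:
    """Assemble the demonstrations followed by the unanswered query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {review}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

# The resulting string can be sent to any LLM completion endpoint;
# the model infers the task from the demonstrations alone.
print(build_prompt("I would not watch it again."))
```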

In the future, there will likely be two types of people on Earth (perhaps even on Mars, but that's a question for Musk):

  • Those who enhance their abilities through the use of AI;
  • Those whose jobs are replaced by AI automation.

💎EgoAlpha: Hello, human👤, are you ready?

📢 News

☄️ EgoAlpha releases TrustGPT, which focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click here or visit the Playgrounds directly to experience it.

👉 Complete news history 👈

📜 Papers

You can click on a title to jump directly to the corresponding PDF.


Survey

👉Complete paper list 🔗 for "Survey"👈

Prompt Engineering

Prompt Design

WizardLM: Empowering Large Language Models to Follow Complex Instructions (2023.04.24)

LLM+P: Empowering Large Language Models with Optimal Planning Proficiency (2023.04.22)

Progressive-Hint Prompting Improves Reasoning in Large Language Models (2023.04.19)

Boosted Prompt Ensembles for Large Language Models (2023.04.12)

Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition (2023.04.10)

REFINER: Reasoning Feedback on Intermediate Representations (2023.04.04)

Context-faithful Prompting for Large Language Models (2023.03.20)

Reflexion: an autonomous agent with dynamic memory and self-reflection (2023.03.20)

A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (2023.02.21)

GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks (2023.02.16)

👉Complete paper list 🔗 for "Prompt Design"👈

Automatic Prompt

👉Complete paper list 🔗 for "Automatic Prompt"👈

Chain of Thought

Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting (2023.05.07)

Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models (2023.05.06)

Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework (2023.05.05)

Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings (2023.05.03)

SCOTT: Self-Consistent Chain-of-Thought Distillation (2023.05.03)

Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models (2023.04.23)

Chain of Thought Prompt Tuning in Vision Language Models (2023.04.16)

Investigating Chain-of-thought with ChatGPT for Stance Detection on Social Media (2023.04.06)

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (2023.02.24)

Active Prompting with Chain-of-Thought for Large Language Models (2023.02.23)

👉Complete paper list 🔗 for "Chain of Thought"👈
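
As a quick taste of the technique these papers study: chain-of-thought prompting elicits intermediate reasoning steps before the final answer, either by appending a cue such as "Let's think step by step" (zero-shot CoT) or by including worked examples in the prompt. The sketch below is illustrative only; the question is a standard arithmetic word problem from the CoT literature.

```python
# Minimal sketch of zero-shot chain-of-thought prompting: appending a
# reasoning cue encourages the model to emit intermediate steps.
question = (
    "A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. "
    "How many apples does it have?"
)

standard_prompt = f"Q: {question}\nA:"
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# Either string can be sent to any LLM completion endpoint. The second
# typically yields a worked derivation (23 - 20 = 3, then 3 + 6 = 9)
# before the final answer, which improves accuracy on multi-step problems.
print(cot_prompt)
```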

Knowledge Augmented Prompt

LasUIE: Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model (2023.04.13)

Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation (2023.02.02)

REPLUG: Retrieval-Augmented Black-Box Language Models (2023.01.30)

Self-Instruct: Aligning Language Model with Self Generated Instructions (2022.12.20)

One Embedder, Any Task: Instruction-Finetuned Text Embeddings (2022.12.19)

The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning (2022.12.16)

Don’t Prompt, Search! Mining-based Zero-Shot Learning with Language Models (2022.10.26)

Knowledge Prompting in Pre-trained Language Model for Natural Language Understanding (2022.10.16)

Knowledge Injected Prompt Based Fine-tuning for Multi-label Few-shot ICD Coding (2022.10.07)

Promptagator: Few-shot Dense Retrieval From 8 Examples (2022.09.23)

👉Complete paper list 🔗 for "Knowledge Augmented Prompt"👈

Evaluation & Reliability

AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models (2023.04.13)

GPTEval: NLG Evaluation using GPT-4 with Better Human Alignment (2023.03.29)

How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks (2023.03.01)

Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints (2023.02.17)

Evaluating the Robustness of Discrete Prompts (2023.02.11)

Controlling for Stereotypes in Multimodal Language Model Evaluation (2023.02.03)

Large Language Models Can Be Easily Distracted by Irrelevant Context (2023.01.31)

Emergent Analogical Reasoning in Large Language Models (2022.12.19)

Discovering Language Model Behaviors with Model-Written Evaluations (2022.12.19)

Constitutional AI: Harmlessness from AI Feedback (2022.12.15)

👉Complete paper list 🔗 for "Evaluation & Reliability"👈

In-context Learning

Self-Refine: Iterative Refinement with Self-Feedback (2023.03.30)

Larger language models do in-context learning differently (2023.03.07)

Language Model Crossover: Variation through Few-Shot Prompting (2023.02.23)

How Does In-Context Learning Help Prompt Tuning? (2023.02.22)

PLACES: Prompting Language Models for Social Conversation Synthesis (2023.02.07)

Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning (2023.01.27)

Transformers as Algorithms: Generalization and Stability in In-context Learning (2023.01.17)

OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization (2022.12.22)

Prompt-Augmented Linear Probing: Scaling Beyond The Limit of Few-shot In-Context Learners (2022.12.21)

In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models (2022.12.20)

👉Complete paper list 🔗 for "In-context Learning"👈

Multimodal Prompt

MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers (2023.05.12)

Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion Models (2023.05.08)

Prompt What You Need: Enhancing Segmentation in Rainy Scenes with Anchor-based Prompting (2023.05.06)

Edit Everything: A Text-Guided Generative System for Images Editing (2023.04.27)

ChatVideo: A Tracklet-centric Multimodal and Versatile Video Understanding System (2023.04.27)

mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality (2023.04.27)

Promptify: Text-to-Image Generation through Interactive Prompt Exploration with Large Language Models (2023.04.18)

Towards Robust Prompts on Vision-Language Models (2023.04.17)

Visual Instruction Tuning (2023.04.17)

Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved With Text (2023.04.14)

👉Complete paper list 🔗 for "Multimodal Prompt"👈

Prompt Application

Emergent and Predictable Memorization in Large Language Models (2023.04.21)

SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks (2023.03.01)

Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis (2023.03.01)

EvoPrompting: Language Models for Code-Level Neural Architecture Search (2023.02.28)

More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models (2023.02.23)

Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales (2023.02.17)

LabelPrompt: Effective Prompt-based Learning for Relation Classification (2023.02.16)

Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition (2023.02.16)

Prompting for Multimodal Hateful Meme Classification (2023.02.08)

Toxicity Detection with Generative Prompt-based Inference (2022.05.24)

👉Complete paper list 🔗 for "Prompt Application"👈

Foundation Models

X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages (2023.05.07)

Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision (2023.05.04)

AutoML-GPT: Automatic Machine Learning with GPT (2023.05.04)

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes (2023.05.03)

Unlimiformer: Long-Range Transformers with Unlimited Length Input (2023.05.02)

Transfer Visual Prompt Generator across LLMs (2023.05.02)

Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback (2023.04.21)

Segment Anything Model for Medical Image Analysis: an Experimental Study (2023.04.20)

Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models (2023.04.19)

Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks (2023.04.18)

👉Complete paper list 🔗 for "Foundation Models"👈

👨‍💻 LLM Usage

Large language models (LLMs) are a revolutionary technology shaping the development of our era. By building on LLMs, developers can create applications that were previously possible only in our imaginations. However, working with LLMs often involves technical barriers, and even at the introductory stage, people may be intimidated by cutting-edge technology. Do you have questions like the following?

  • How can applications be built with LLMs programmatically?
  • How can LLMs be used and deployed in your own programs?

💡 A tutorial accessible to all audiences, not just computer science professionals, should offer detailed and comprehensive guidance so that readers can get started and become productive in a short amount of time, ultimately using LLMs flexibly and creatively to build the programs they envision. And now, just for you: the most detailed and comprehensive LangChain beginner's guide, sourced from the official LangChain website but with further adjustments to the content, accompanied by thoroughly annotated code examples that walk all audiences through the code line by line.
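
As a taste of what the guide covers, here is a minimal sketch in the style of the classic LangChain quick start. It assumes a 2023-era langchain release with the openai package installed and an OPENAI_API_KEY set in the environment; import paths may differ in newer versions.

```python
# Minimal sketch of a prompt-template-plus-LLM chain, following the
# classic LangChain quick start (assumed: `pip install langchain openai`
# and OPENAI_API_KEY set in the environment).
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Wrap an LLM endpoint; temperature controls sampling randomness.
llm = OpenAI(temperature=0.9)

# A reusable prompt with one input variable.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

# Compose prompt and model into a chain, then run it on a single input.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
```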

Click 👉here👈 to take a quick tour of getting started with LLMs.

✉️ Contact

This repo is maintained by EgoAlpha Lab. Questions and discussions are welcome via helloegoalpha@gmail.com.

We welcome discussions with friends from the academic and industrial communities, and we look forward to exploring the latest developments in prompt engineering and in-context learning together.

🙏 Acknowledgements

Thanks to the PhD students from EgoAlpha Lab and the other contributors to this repo. We will continue to improve the project and maintain this community well. We also express our sincere gratitude to the authors of the resources listed here; your efforts have broadened our horizons and enabled us to perceive a more wonderful world.
