
LLM Retrieval Stack: Prompt Augmentation for Large Language Models (LLMs) using Semantic Search

Description

This project aims to develop a cloud-based system to embed and store arbitrary user data, facilitating semantic search and retrieval of personal or organizational knowledge for input to GPT-4 and other language models.

Systems like this one improve the relevance and quality of large language model outputs by supplying the model with additional context from a knowledge base of the user's choosing, allowing the model's knowledge to be supplemented on the fly. For more information, see the Retrieval-Augmented Generation (RAG) paper.
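
At its core, the retrieval step embeds the user's question, looks up the nearest stored chunks, and prepends them to the prompt. The following is a minimal sketch of that flow; embed, vector_store, and chunk.text are illustrative stand-ins, not this project's actual interfaces.

```python
# Minimal retrieve-then-augment sketch. The embed() callable and
# vector_store.query() method are hypothetical stand-ins.

def augment_prompt(question: str, vector_store, embed, top_k: int = 5) -> str:
    query_vector = embed(question)                    # text -> embedding vector
    chunks = vector_store.query(query_vector, top_k)  # nearest-neighbor search
    context = "\n\n".join(chunk.text for chunk in chunks)
    return (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The augmented prompt is then sent to GPT-4 or another model in place of the raw question.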

Advantages over Existing Solutions

ChatGPT Retrieval Plugin

The ChatGPT Retrieval Plugin by OpenAI serves the same purpose: "it allows users to obtain the most relevant document snippets from their data sources, such as files, notes, or emails, by asking questions or expressing needs in natural language. Enterprises can make their internal documents available to their employees through ChatGPT using this plugin."

Advantages of this project:

  • Serverless: this project is deployed serverlessly instead of as a container
    • Improved scalability
    • Cost benefits depend on use case and workload
  • Stream processing: this project processes input as a stream (see the first sketch following this list)
    • Memory use is constant, regardless of input size
    • API requests for chunks can be issued asynchronously while the rest of the stream is still being processed, rather than only after the entire input has been read
  • Data parallelism: this project splits input into parts (e.g., 1 MB each) and distributes them across an arbitrary number of cloud function instances instead of processing the entire input sequentially (see the second sketch following this list)
    • More efficient processing of large inputs
  • Decoupling from OpenAI services: this project is designed to be extensible to arbitrary clients, LLMs, and text embedding services rather than being tied to OpenAI's alone (i.e., the ChatGPT web interface and OpenAI embedding models)
  • Storage of original data: this project uses an object storage service (e.g., AWS S3) as the single source of truth for the original data instead of storing it alongside the embeddings in the vector store (e.g., Pinecone, Weaviate); see the third sketch following this list
    • Improved data durability
    • Larger supported uploads
    • Lower costs
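
To illustrate the stream-processing point above, chunking can be implemented as a generator that holds only one fixed-size chunk in memory at a time. This is a simplified sketch, not the project's actual pipeline:

```python
import io

def iter_chunks(stream: io.BufferedIOBase, chunk_size: int = 1024 * 1024):
    """Yield fixed-size byte chunks from a stream.

    Only one chunk is held in memory at a time, so memory use stays
    constant regardless of the total input size.
    """
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk
```

Each yielded chunk can be handed to an embedding API request immediately, so requests overlap with reading the remainder of the stream.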
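
The data-parallel fan-out could look like the sketch below, which uses boto3's asynchronous "Event" invocation type; the worker function name and payload shape are hypothetical:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

def fan_out(part_keys: list[str], function_name: str = "embed-part") -> None:
    """Invoke one worker function per input part.

    "Event" invocations return immediately, so all parts are processed in
    parallel rather than sequentially.
    """
    for key in part_keys:
        lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="Event",  # asynchronous, fire-and-forget
            Payload=json.dumps({"part_key": key}),
        )
```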
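
Finally, separating original data from embeddings might look like this sketch, where the full text is written to S3 and only the vector plus a pointer is upserted to the vector store (the bucket name, index handle, and metadata shape are illustrative):

```python
import boto3

s3 = boto3.client("s3")

def store_document(doc_id: str, text: str, vector: list[float], index) -> None:
    """Write the original text to S3 (the single source of truth) and
    upsert only the embedding and an S3 pointer to the vector store."""
    s3.put_object(
        Bucket="llm-retrieval-originals",  # hypothetical bucket name
        Key=doc_id,
        Body=text.encode("utf-8"),
    )
    # Pinecone-style upsert: (id, vector, metadata) with a pointer only.
    index.upsert(vectors=[(doc_id, vector, {"s3_key": doc_id})])
```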

Architecture

Project Structure

Getting Started

Prerequisites:

  • SAM CLI
  • AWS account
  • OpenAI API key
  • Pinecone API key

To get started with this project, follow the steps below:

  1. Clone the repository
  2. From the deploy/aws directory, run sam deploy --guided and follow the prompts

Milestone Progress

✅ Milestone 1: Scalable Processing and Storage for Text Embeddings

Goal: Develop a cloud-based system to embed and store arbitrary user data, facilitating semantic search and retrieval of personal or organizational knowledge for input to GPT-4 and other language models.

⬜ Milestone 2: Semantic Search and Prompt Augmentation

Goal: Develop a cloud-based system to perform semantic search and prompt augmentation for large language models (LLMs) using the system developed in Milestone 1.

⬜ Milestone 3: User Interface

Goal: Develop a demo web application to showcase the system developed in previous milestones.
