# Invoice data processing with Llama2 13B LLM RAG on Local CPU

YouTube: Invoice Data Processing with Llama2 13B LLM RAG on Local CPU


## Quickstart

The RAG pipeline runs on LlamaCPP, Haystack, and Weaviate.

1. Download the Llama2 13B model; see `models/model_download.txt` for the download link.
2. Install the local Weaviate DB with Docker:

```bash
docker compose up -d
```
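Once the container is running, you can confirm that Weaviate is reachable before ingesting anything. This is a minimal sketch assuming the default local endpoint (`http://localhost:8080`) and the `weaviate-client` v3.x Python package:

```python
# Quick readiness check against the local Weaviate instance
# (assumes weaviate-client v3.x and the default port 8080).
import weaviate

client = weaviate.Client("http://localhost:8080")
print(client.is_ready())  # True when Weaviate is up and accepting requests
```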

3. Install the requirements:

```bash
pip install -r requirements.txt
```

4. Copy text PDF files to the `data` folder.
5. Run the script to convert the text to vector embeddings and save them in the Weaviate vector storage (a rough outline of this step follows the command):

```bash
python ingest.py
```
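For orientation, the ingestion step roughly follows the flow sketched below: convert the PDFs in `data/`, split them into passages, embed the passages, and write everything to Weaviate. This is a hypothetical outline based on the Haystack 1.x API, not the repository's exact `ingest.py`; the embedding model, split sizes, and paths are assumptions:

```python
# Hypothetical sketch of the ingestion flow (the actual ingest.py may differ):
# read text PDFs from data/, split them, embed the chunks, store them in Weaviate.
from pathlib import Path

from haystack.document_stores import WeaviateDocumentStore
from haystack.nodes import EmbeddingRetriever, PDFToTextConverter, PreProcessor

# Connect to the local Weaviate instance started with Docker
document_store = WeaviateDocumentStore(
    host="http://localhost",
    port=8080,
    embedding_dim=384,  # must match the embedding model below
)

# Convert every PDF in data/ to Haystack documents
converter = PDFToTextConverter(remove_numeric_tables=True)
docs = []
for pdf_path in Path("data").glob("*.pdf"):
    docs.extend(converter.convert(file_path=pdf_path, meta=None))

# Split long documents into smaller passages for retrieval
preprocessor = PreProcessor(split_by="word", split_length=200, split_overlap=20)
passages = preprocessor.process(docs)
document_store.write_documents(passages)

# Compute and store a vector embedding for every passage
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
)
document_store.update_embeddings(retriever)
```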

6. Run the script to process the data with the Llama2 13B LLM RAG pipeline and return the answer:

```bash
python main.py "What is the invoice number value?"
```
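Under the hood, this step performs retrieval-augmented generation: it pulls the most relevant invoice passages from Weaviate and lets the local Llama2 13B model answer from that context. The sketch below illustrates the idea with `llama-cpp-python` and Haystack; the model path, prompt template, and generation parameters are assumptions and may differ from the actual `main.py`:

```python
# Hypothetical sketch of the query flow (the actual main.py may differ):
# retrieve relevant passages from Weaviate, then let the local
# Llama2 13B model answer the question from that context.
import sys

from haystack.document_stores import WeaviateDocumentStore
from haystack.nodes import EmbeddingRetriever
from llama_cpp import Llama

question = sys.argv[1]

document_store = WeaviateDocumentStore(
    host="http://localhost", port=8080, embedding_dim=384
)
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
)

# Top passages from the ingested invoices
context = "\n".join(doc.content for doc in retriever.retrieve(query=question, top_k=3))

# Load the quantized Llama2 13B model on CPU (path and context size are assumptions)
llm = Llama(model_path="models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=2048)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
output = llm(prompt, max_tokens=256)
print(output["choices"][0]["text"].strip())
```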