# Easy-to-Follow RAG Pipeline Tutorial: Invoice Processing with ChromaDB & LangChain

Secure and private: on-premise invoice processing with a LangChain and Ollama RAG pipeline.
- Install the requirements:

  ```shell
  pip install -r requirements.txt
  ```

- Install Ollama and pull the LLM model specified in `config.yml`.

- Copy text-based PDF files into the `data` folder.

- Run the ingestion script to convert the text to vector embeddings and save them in Chroma vector storage:

  ```shell
  python ingest.py
  ```

- Run the main script to process the question with the LLM RAG pipeline and return the answer:

  ```shell
  python main.py "What is the invoice number value?"
  ```
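The exact schema of `config.yml` depends on the repository's code; a hypothetical sketch, assuming keys for the model name and storage paths (every key and value below is illustrative, so match them to the actual file):

```yaml
# Hypothetical config.yml layout — names are illustrative only.
ollama:
  model: llama2          # pull first with: ollama pull llama2
paths:
  data_dir: data         # where the input PDFs live
  chroma_dir: db         # where Chroma persists the embeddings
```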
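The idea behind `ingest.py` and `main.py` can be illustrated with a dependency-free toy: embed each text chunk as a vector, store the pairs, and answer a question by retrieving the most similar chunk. The bag-of-words "embedding", in-memory store, and sample invoice chunks below are stand-ins for the real Ollama embeddings and Chroma storage, not the repository's actual code:

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # The real pipeline uses a neural embedding model instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# "Ingest": pair each chunk with its vector (Chroma does this persistently).
chunks = [
    "Invoice number: INV-2024-001",   # made-up sample data
    "Total amount due: 1,250.00 EUR",
    "Payment terms: net 30 days",
]
store = [(c, embed(c)) for c in chunks]


def retrieve(query: str, k: int = 1) -> list[str]:
    # "Query": rank stored chunks by similarity to the question and
    # return the top k — the context a RAG pipeline feeds to the LLM.
    qv = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(qv, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]


print(retrieve("What is the invoice number value?"))
# → ['Invoice number: INV-2024-001']
```

In the real pipeline the retrieved chunks are not the final answer: they are inserted into a prompt and the LLM served by Ollama extracts the requested value from them.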