# Nexgen Rules Chatbot


## Steps to run the project

```bash
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
```

Create a `.env` file in the root directory and add your Pinecone credentials as follows:

```ini
PINECONE_API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
PINECONE_API_ENV = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
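The application is expected to read these values at startup. Below is a minimal sketch of how that typically looks, assuming python-dotenv and the classic pinecone-client (v2.x) `pinecone.init` API implied by the `PINECONE_API_ENV` variable:

```python
import os

import pinecone
from dotenv import load_dotenv

# Read PINECONE_API_KEY and PINECONE_API_ENV from the .env file in the project root
load_dotenv()

# Initialise the Pinecone client (classic pinecone-client v2.x style)
pinecone.init(
    api_key=os.environ.get("PINECONE_API_KEY"),
    environment=os.environ.get("PINECONE_API_ENV"),
)
```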

Download the quantized model from the link provided in the `model` folder and keep the model in the `model` directory:

## Download the Llama 2 Model

Download `llama-2-7b-chat.ggmlv3.q4_0.bin` from the following link:
https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/tree/main
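Once the file is in place, the app can load it locally. A minimal sketch, assuming LangChain's `CTransformers` wrapper for GGML models (the exact generation parameters used by `app.py` may differ):

```python
from langchain.llms import CTransformers

# Load the quantized Llama 2 chat model from the model/ directory
llm = CTransformers(
    model="model/llama-2-7b-chat.ggmlv3.q4_0.bin",
    model_type="llama",
    config={"max_new_tokens": 512, "temperature": 0.8},  # illustrative values
)
```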
```bash
# run the following command
python store_index.py
```
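`store_index.py` builds the vector index before the chatbot can answer questions. A rough sketch of what such a script typically does with this stack; the data folder, embedding model, and index name below are assumptions, not taken from the repository:

```python
import os

import pinecone
from dotenv import load_dotenv
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone

load_dotenv()
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_API_ENV"],
)

# Load the source PDFs (folder name assumed)
documents = DirectoryLoader("data/", glob="*.pdf", loader_cls=PyPDFLoader).load()

# Split into overlapping chunks sized for the embedding model
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20).split_documents(documents)

# Embed each chunk and upsert into Pinecone (index name assumed)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
Pinecone.from_texts([c.page_content for c in chunks], embeddings, index_name="nexgen-rules")
```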
```bash
# Finally, run the following command
python app.py
```
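`app.py` serves the chat interface. A speculative sketch of the query path such an app usually wires up, retrieving context from Pinecone and answering with the local Llama 2 model; the route, index name, and port are assumptions:

```python
import os

import pinecone
from dotenv import load_dotenv
from flask import Flask, jsonify, request
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import CTransformers
from langchain.vectorstores import Pinecone

load_dotenv()
pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_API_ENV"],
)

app = Flask(__name__)

# Connect to the existing index built by store_index.py (index name assumed)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
docsearch = Pinecone.from_existing_index("nexgen-rules", embeddings)

# Local quantized Llama 2 model used to generate answers
llm = CTransformers(
    model="model/llama-2-7b-chat.ggmlv3.q4_0.bin",
    model_type="llama",
    config={"max_new_tokens": 512, "temperature": 0.8},
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={"k": 2}),
)

@app.route("/get", methods=["POST"])
def chat():
    # Answer a user question using retrieved context plus the local LLM
    question = request.form["msg"]
    return jsonify({"answer": qa.run(question)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080, debug=True)
```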

Now, open up localhost in your browser.

## Tech Stack Used

- Python
- LangChain
- Flask
- Meta Llama2
- Pinecone