Chat with Files is an AI-powered QnA chatbot built on the Llama 70b model that extracts information from multiple PDF documents and answers questions about their content. It streamlines information retrieval, making it easy to work with large volumes of text through conversation.
Visit the live application: Chat with Files
- Perform QnA on multiple PDFs simultaneously.
- Intuitive user interface for easy interaction.
- Extracts information swiftly from uploaded documents.
- Provides accurate responses to inquiries based on PDF content.
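Pipelines like this typically split the extracted PDF text into overlapping chunks before passing relevant passages to the model. A minimal sketch of that chunking step (the function name and sizes here are illustrative, not the app's actual code):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so each piece fits the model's context.

    The overlap keeps sentences that straddle a boundary retrievable from
    either neighboring chunk.
    """
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

At query time, the chunks most similar to the question would be sent to the model alongside the question itself.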
To use this project locally:

- Clone the repository:

      git clone https://github.com/HemantModi11/ChatwithDocs.git
      cd ChatwithDocs

- Create a virtual environment:

      python3 -m venv venv

- Activate the virtual environment:
  - On Windows:

        venv\Scripts\activate

  - On macOS and Linux:

        source venv/bin/activate

- Install dependencies:

      pip install -r requirements.txt

- Run the application:

      streamlit run app.py

- Access the application:
  - Open your web browser and go to http://localhost:8501 to interact with the chatbot.
  - Upload PDFs and wait a few moments for document processing.
  - The QnA section becomes available once the PDFs have been processed.
NOTE: Set your Replicate API token in the .env file; you can get a token from Replicate for free.
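For reference, the Replicate Python client reads its token from the `REPLICATE_API_TOKEN` environment variable, which a .env loader (such as python-dotenv) populates at startup. A minimal sketch of a startup check (the helper function is illustrative, not part of the app):

```python
import os


def get_replicate_token() -> str:
    """Return the Replicate API token, failing fast with a clear message if unset."""
    token = os.environ.get("REPLICATE_API_TOKEN")
    if not token:
        raise RuntimeError(
            "REPLICATE_API_TOKEN is not set; add it to your .env file."
        )
    return token
```

Failing fast like this surfaces a missing token immediately, rather than as an opaque API error after the first question is asked.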
Contributions are welcome! Please create a pull request with proposed changes.
Found a bug or have suggestions? Please open an issue here.
This project is licensed under the MIT License.