Full-stack artificial intelligence project with both backend (Langchain, Langgraph) and frontend (C# Blazor) components
🌟 LLM RAG Agent Knowledgebase 🌟

Welcome to the LLM RAG Agent Knowledgebase!

This is a full-stack artificial intelligence project with both backend and frontend components. 🚀

📚 Project Overview

This project provides an AI agent that can chat about ingested document files.

It's designed to handle file ingestion, manage vector stores, chat with users, and perform advanced search queries.

🛠️ Key technologies

  • 🌐 LangGraph: Agent orchestration.
  • 🚀 FastAPI: Backend framework for handling API operations.
  • 📄 Unstructured package: File ingestion and OCR.
  • 🧠 Weaviate: Vector store.
  • 🐳 Docker Compose: Runs Weaviate in a container.
  • 🌐 C# Blazor: Frontend framework for the WebAssembly web application.

🛠️ Backend: API Project

The backend of this project is powered by FastAPI, enabling efficient handling of AI agent operations. Key features include:

  • File Ingestion:

    • Unstructured package: Processes diverse file types, including OCR for Portuguese-language documents.
    • Chunk Splitting: Files are split into smaller chunks to enhance semantic similarity during RAG (Retrieval-Augmented Generation) queries.
  • Vector Store:

    • Weaviate: Utilized as the vector store, running separately via a Docker Compose file.
  • Agent Capabilities:

    • Chat: Engages in conversations and uses tools for specific tasks.
    • Tool Use: Currently supports vector store retrieval.
    • Memory: Saves conversation history using an SQLite database.
  • Search:

    • Endpoints: Implements standard search in documents with keyword, semantic, and hybrid search capabilities.
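The chunk-splitting step above can be sketched in plain Python. This is a minimal illustration of overlapping fixed-size chunking, not the project's actual splitter (the repo uses the Unstructured package, and the chunk size and overlap values here are assumptions):

```python
def split_into_chunks(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size character chunks.

    Overlap keeps sentences that straddle a boundary present in both
    neighboring chunks, which helps semantic-similarity retrieval.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

chunks = split_into_chunks("some long document text " * 100, chunk_size=200, overlap=20)
```

Each chunk is later embedded and stored in Weaviate, so the overlap trades a little storage for better recall at chunk boundaries.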

🌐 Frontend: Web Project

The frontend is built using C# Blazor, providing a seamless and interactive user experience.

  • WebAssembly Project:
    • Runs entirely in the browser.
    • Fetches data from the backend API project.

🚀 Getting Started

To get started with the LLM RAG Agent Knowledgebase, follow these steps:

  1. Clone the Repository:

    git clone https://github.com/rhuanbarros/llm-rag-agent-knowledgebase.git
  2. Vector Store Setup:

    • Run the Weaviate server
    cd weaviate
    docker compose up
  3. Backend Setup:

    • Open the api directory in the devcontainer using VS Code
    • Install the Unstructured dependencies
     sudo apt update
     sudo apt install libmagic-dev -y
     sudo apt install poppler-utils -y
     sudo apt install tesseract-ocr -y
     sudo apt install tesseract-ocr-por -y  # Portuguese language support
     sudo apt install libreoffice -y
    • Install dev dependencies
     sudo apt-get install graphviz graphviz-dev
    • Navigate to the app directory and install the Python dependencies.
     cd app
     poetry install
  4. Configure the OpenAI api key:

    • Create a .env file in the ./api/app directory with the following content:
    OPENAI_API_KEY=""
    
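At startup the backend needs to read this key from the environment. The sketch below shows what a .env loader does; the actual project most likely relies on the python-dotenv package or framework settings, so this hand-rolled loader is only an illustration:

```python
import os
from pathlib import Path

def load_dotenv_minimal(path: str = ".env") -> None:
    """Tiny .env loader: sets KEY=VALUE pairs into os.environ.

    Real projects typically use the python-dotenv package instead;
    this only illustrates what such a loader does.
    """
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and malformed lines.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

load_dotenv_minimal()
api_key = os.environ.get("OPENAI_API_KEY", "")
```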
  5. Run the backend:

     cd app
     source .venv/bin/activate
     fastapi dev procurador/main.py
  6. Ingest documents:
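This README does not document the ingestion command. As a hedged sketch only, the snippet below assumes a hypothetical POST /ingest route on the dev server and merely builds the request; the endpoint name, port, and payload shape are all assumptions, not the project's actual API:

```python
import json
import urllib.request

# Hypothetical endpoint: the real route is not documented in this README.
INGEST_URL = "http://127.0.0.1:8000/ingest"

def build_ingest_request(file_path: str) -> urllib.request.Request:
    """Build (but do not send) a JSON request pointing the backend at a file."""
    payload = json.dumps({"path": file_path}).encode("utf-8")
    return urllib.request.Request(
        INGEST_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ingest_request("./documents/report.pdf")
# urllib.request.urlopen(req)  # uncomment once the backend is running
```

Check the FastAPI interactive docs at http://127.0.0.1:8000/docs for the real ingestion route once the backend is up.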

  7. Run the frontend:

    • Open the web directory in the devcontainer using VS Code, then run:
     dotnet run
  8. Enjoy:

    • Access the webpage and chat with the LLM.
