LLM LangChain projects (Generative AI):
- LLMs - AutoPureData: Automated Filtering of Web Data for LLM Fine-tuning
- Research project that used Llama 3 to automate filtering of web data before LLM fine-tuning
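The filtering pipeline can be sketched as below. This is a minimal illustration, not the project's actual code: `judge()` is a rule-based stand-in for the Llama 3 call that classifies each row, and the banned-word list is invented for the example.

```python
# Sketch of LLM-assisted web-data filtering. judge() stands in for the
# Llama 3 prompt that flags unsafe or low-quality text in the real project.
def judge(text: str) -> bool:
    """Stand-in classifier: reject rows containing banned markers."""
    banned = {"spam", "clickbait"}  # illustrative list, not from the project
    return not any(word in text.lower() for word in banned)

def filter_rows(rows):
    """Keep only the rows the judge approves."""
    return [r for r in rows if judge(r)]

rows = ["A clean article about science.", "Buy now!! total spam offer"]
print(filter_rows(rows))  # only the clean row survives
```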
- LLMs - Chat with a Wikipedia page
- Used LangChain, RAG, and Chainlit (to host the web page)
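The core RAG idea can be shown without any framework: retrieve the most relevant chunk of the page, then build a grounded prompt for the LLM. This sketch scores chunks by simple word overlap (a stand-in for the embedding search a LangChain retriever would do), and the prompt template is illustrative.

```python
# Minimal RAG sketch: retrieve the best-matching chunk by word overlap
# (a stand-in for embedding similarity), then assemble the LLM prompt.
def retrieve(chunks, question, k=1):
    q = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))
    return scored[:k]

def build_prompt(chunks, question):
    context = "\n".join(retrieve(chunks, question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = ["Paris is the capital of France.", "Mount Fuji is in Japan."]
print(build_prompt(chunks, "What is the capital of France?"))
```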
- LLMs - Feedback summarizer
- Used LangChain, Selenium, and Gradio (for hosting)
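A common shape for such a summarizer is map-reduce: summarize batches of feedback separately, then combine the partial summaries. In this sketch `summarize()` is a stand-in for the LangChain LLM call (here it just counts sentiment words), and Selenium would be the source of the `feedback` list.

```python
# Sketch of a map-reduce feedback summarizer. summarize() stands in for
# the LLM call; Selenium would scrape the `feedback` strings in practice.
def summarize(texts):
    # Stand-in "summary": count sentiment words instead of prompting an LLM.
    pos = sum(t.lower().count("good") for t in texts)
    neg = sum(t.lower().count("bad") for t in texts)
    return f"{pos} positive, {neg} negative mentions"

def batched(items, size):
    """Split the feedback into fixed-size batches for the map step."""
    return [items[i:i + size] for i in range(0, len(items), size)]

feedback = ["Good UI", "Bad latency", "Good docs", "Good support"]
partials = [summarize(b) for b in batched(feedback, 2)]
print(partials)
```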
- LLMs - Chat with an image
- Used LangChain and Streamlit (for hosting)
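Image chat typically works by base64-encoding the image into the message payload sent to a multimodal model. The exact message schema depends on the model wrapper used, so the dict below is an illustrative shape, not the project's code.

```python
import base64

# Sketch: pack an image and a question into one multimodal chat message.
# The field names mirror a common OpenAI-style schema; treat them as an
# assumption, since each LangChain model wrapper has its own format.
def image_message(image_bytes: bytes, question: str) -> dict:
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": f"data:image/png;base64,{b64}"},
        ],
    }

msg = image_message(b"abc", "What is in this image?")
print(msg["content"][0]["text"])
```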
- LLMs - Prompt shortener
- Used LangChain and Gradio (for hosting)
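A crude version of prompt shortening can be done with plain string cleanup: drop filler phrases and collapse whitespace. The real project routes this through an LLM via LangChain; the filler list here is purely illustrative.

```python
import re

# Sketch of prompt shortening: strip filler phrases and squeeze whitespace.
# The FILLERS list is an illustrative assumption, not the project's list.
FILLERS = ["I would like you to", "please", "kindly"]

def shorten(prompt: str) -> str:
    out = prompt
    for filler in FILLERS:
        out = re.sub(re.escape(filler), "", out, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", out).strip()

print(shorten("Please  kindly summarize   this text"))
```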
- LLMs - Agents
- Used LangChain Agents for various tasks such as searching online, fetching weather, performing math operations, and running Python code
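The essence of an agent is a registry of tools plus a step that picks one. In a LangChain agent the LLM chooses the tool; in this sketch the caller names it directly, the calculator uses a safe AST evaluator, and the weather tool returns canned data since no API is wired up.

```python
import ast
import operator

# Safe arithmetic tool: evaluate +,-,*,/ expressions via the AST, never eval().
def calculator(expr: str) -> str:
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        return ops[type(node.op)](ev(node.left), ev(node.right))
    return str(ev(ast.parse(expr, mode="eval").body))

# Stub weather tool (assumption: the real agent calls a weather API).
def weather(city: str) -> str:
    return f"(stub) weather for {city}"

TOOLS = {"calc": calculator, "weather": weather}

# In a real LangChain agent the LLM selects the tool; here it's explicit.
def dispatch(tool: str, arg: str) -> str:
    return TOOLS[tool](arg)

print(dispatch("calc", "2 + 3 * 4"))  # 14
```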
- LLMs - Chat with Data
- Chat with company data to surface insights that can improve profitability
- Generated synthetic data using LLMs
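Both pieces can be sketched without a model: an aggregation of the kind the LLM would be asked to run, and synthetic rows generated with `random` as a stand-in for the LLM-generated data the project used. All names and values below are illustrative.

```python
import random

# Stand-in for LLM-generated synthetic data: seeded random sales rows.
def synthetic_sales(n, seed=0):
    rng = random.Random(seed)
    return [{"product": rng.choice(["A", "B"]), "revenue": rng.randint(10, 100)}
            for _ in range(n)]

# The kind of insight "chat with data" surfaces: revenue totals per product.
def revenue_by_product(rows):
    totals = {}
    for r in rows:
        totals[r["product"]] = totals.get(r["product"], 0) + r["revenue"]
    return totals

rows = [{"product": "A", "revenue": 10}, {"product": "A", "revenue": 5},
        {"product": "B", "revenue": 7}]
print(revenue_by_product(rows))  # {'A': 15, 'B': 7}
```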
- LLMs - Model Deployment
- Used LMDeploy and FastAPI to deploy the model behind an OpenAI-compatible API
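"OpenAI-compatible" means the server returns the same JSON shape as `/v1/chat/completions`, so existing OpenAI clients work unchanged. The sketch below builds that response body; the real project would serve it from a FastAPI route backed by LMDeploy.

```python
import time
import uuid

# Sketch of the JSON body an OpenAI-compatible chat endpoint returns.
# Field names follow the public OpenAI chat-completions shape.
def chat_completion_body(model: str, reply: str) -> dict:
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:8]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": reply},
            "finish_reason": "stop",
        }],
    }

body = chat_completion_body("llama3", "Hello!")
print(body["choices"][0]["message"]["content"])
```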
For demo, please open the page: Demo
Note: Before you run these, please install Ollama and pull the model you prefer. Make sure to copy .env.example to .env and fill in the model name.
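The setup from the note, sketched as shell commands; `llama3` here is just an example model name, so substitute whichever model you prefer.

```shell
# Example setup (assumes Ollama is already installed and running).
ollama pull llama3        # pull your preferred model; llama3 is an example
cp .env.example .env      # then edit .env and set the model name
```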
Data Science projects:
- Structured code and folders
- Common functions to reuse - common_functions.py
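As one hypothetical example of what a shared module like common_functions.py might contain (the actual contents are not shown here), a reusable timing decorator:

```python
import functools
import time

# Hypothetical shared utility: a decorator that records how long a call took.
def timed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    return wrapper

@timed
def square(x):
    return x * x

print(square(4))  # 16
```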
- LLMs - Made LLM calls faster and cheaper, reducing operating costs
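One standard way to cut LLM latency and cost is to cache responses so identical prompts never hit the model twice. Whether the project used this exact technique is an assumption; `call_llm` below is a stand-in for a real model call.

```python
# Sketch of response caching: repeated prompts are served from a dict
# instead of re-invoking the model. call_llm stands in for the real call.
CALLS = {"count": 0}
_cache = {}

def call_llm(prompt: str) -> str:
    CALLS["count"] += 1            # track how often the "model" is hit
    return f"response to: {prompt}"  # stand-in for a real LLM response

def cached_llm(prompt: str) -> str:
    if prompt not in _cache:
        _cache[prompt] = call_llm(prompt)
    return _cache[prompt]

cached_llm("hi")
cached_llm("hi")
print(CALLS["count"])  # 1 — the second call was served from the cache
```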
- Applying more concepts that are useful in real-world projects