A minimal CLI tool to locally summarize any text using LLM!
Updated Oct 17, 2024 - Python
Run GGUF LLM models in the latest version of TextGen-webui
This repository contains a Python-based tool for summarizing web content using the Ollama API. It scrapes articles from URLs, cleans and processes the HTML content, and generates summaries using a pre-trained language model. The repository also includes a rich-based logging utility for improved console output.
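The scrape-clean-summarize flow described above can be sketched roughly as follows. This is a minimal illustration using only the standard library, not the repository's actual code: the function names are hypothetical, and it assumes an Ollama server listening on the default `localhost:11434` with its `/api/generate` endpoint.

```python
import json
import urllib.request
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def clean_html(html: str) -> str:
    """Strip tags, scripts, and styles, keeping readable article text."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)


def summarize(text: str, model: str = "llama3") -> str:
    """Send cleaned text to a local Ollama server and return its summary."""
    payload = json.dumps({
        "model": model,
        "prompt": f"Summarize the following article:\n\n{text}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With a running Ollama instance, `summarize(clean_html(page_source))` would produce the summary; without one, `clean_html` alone still works for the scraping/cleaning step.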
Alacritty + Fish + Zellij + Starship + Neovim + i3 + Supermaven + Ollama 🦙 = 🚀
Local AI Open Orca For Dummies is a user-friendly guide to running Large Language Models locally. Simplify your AI journey with easy-to-follow instructions and minimal setup. Perfect for developers tired of complex processes!
ScrAIbe Assistant is designed to leverage Whisper for precise audio processing and local LLMs via Ollama for efficient summarization. This tool is perfect for tasks such as taking notes from team meetings or lectures, offering a secure environment where no data—be it text, audio, or otherwise—leaves your local machine.
A clone of InfiniteCraft (AI!!! LLMs!!) you can run on a laptop _without_ a good GPU!!
This project builds a local retrieval-augmented generation (RAG) pipeline from scratch, connects it to a local LLM, and deploys it as a chatbot via Gradio.
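The retrieval half of such a from-scratch RAG pipeline can be sketched in a few lines. This is an assumption-laden toy, not the repository's implementation: it uses bag-of-words cosine similarity where a real pipeline would use an embedding model, and all function names are hypothetical.

```python
import math
from collections import Counter


def vectorize(text: str) -> Counter:
    # Crude term-frequency vector; real RAG pipelines use embeddings.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank document chunks by similarity to the query; keep the top k."""
    qv = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(qv, vectorize(c)),
                  reverse=True)[:k]


def build_prompt(query: str, chunks: list[str]) -> str:
    """Stuff the retrieved context into a prompt for the local LLM."""
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The string returned by `build_prompt` is what would be sent to the local LLM; wrapping that call in a Gradio chat interface gives the chatbot front end.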
A local chatbot for managing docs
Chat with your PDF using your local LLM via the Ollama client.
This is a basic workflow with CrewAI agents working with sales transactions to draw business insights and marketing recommendations. The agents will work on everything from the execution plan to the business insights report. It works with local LLM via Ollama (I'm using llama3:8B but you can easily change it).