TeleChat: 🤖️ an AI chat Telegram bot with Web Search, powered by GPT-3.5/4/4 Turbo/4o, DALL·E 3, Groq, Gemini 1.5 Pro/Flash and the official Claude 2.1/3/3.5 API, written in Python and deployable on Zeabur, fly.io and Replit.
Updated Sep 30, 2024 - Python
Chinese Mixtral-8x7B (Chinese-Mixtral-8x7B)
Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
🐳 Aurora is a Chinese-language MoE model. Built on Mixtral-8x7B, it is further trained to activate the base model's Chinese open-domain chat capability.
Fast Inference of MoE Models with CPU-GPU Orchestration
Build LLM-powered robots in your garage with MachinaScript For Robots!
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline parallelism; faster than ZeRO/ZeRO++/FSDP.
An innovative Python project that integrates AI-driven agents for Agile software development, leveraging advanced language models and collaborative task automation.
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Reference implementation of the Mistral AI 7B v0.1 model.
An unofficial C#/.NET SDK for accessing the Mistral AI API
A tool for testing different large language models without writing code.
A project showing how to use Spring AI with OpenAI to chat with the documents in a library. Documents are stored in a relational/vector database. The AI creates embeddings from the documents, which are stored in the vector database; the vector database is then queried for the nearest document, and the AI uses that document to generate the answer.
Notes on the Mistral AI model
LLM prompt augmentation with RAG, integrating external custom data from a variety of sources to allow chatting with those documents.
A versatile CLI and Python wrapper for Groq AI's breakthrough LPU Inference Engine. Streamline the creation of chatbots and generate dynamic text with speeds of up to 800 tokens/sec.
Unofficial .NET SDK for the Mistral AI platform.
Chat with your PDF files for free, using Langchain, Groq, ChromaDB, and Jina AI embeddings.
Turn any YouTube video into a polished blog post, using Groq and Deepgram.