We compared LangChain, Fixie, and Marvin
Updated Apr 24, 2023 · Python
(Cine)Matic is a comprehensive demonstration of the NeMo Guardrails framework's capabilities. The project shows how guardrails can keep a large language model (LLM) focused, particularly when using Retrieval-Augmented Generation (RAG) to answer movie-related questions.
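The idea of keeping an LLM on topic can be sketched with a minimal input rail. This is a hypothetical keyword filter for illustration only, not the actual NeMo Guardrails implementation (which uses Colang flows and model-based checks):

```python
# Hypothetical topical guardrail for a movie-QA RAG bot: refuse
# off-topic questions before they ever reach retrieval or the LLM.
MOVIE_TERMS = {"movie", "film", "actor", "director", "plot", "scene", "cast"}

def is_on_topic(question: str) -> bool:
    """Return True if the question mentions any movie-related term."""
    words = {w.strip("?.,!").lower() for w in question.split()}
    return bool(words & MOVIE_TERMS)

def answer(question: str) -> str:
    if not is_on_topic(question):
        return "Sorry, I can only answer movie-related questions."
    # Placeholder for the real RAG pipeline (retrieve context, call the LLM).
    return f"[retrieved context + LLM answer for: {question}]"
```

A production rail would typically classify intent with a model rather than keywords, but the control flow (check first, answer second) is the same.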
Using NVIDIA NeMo Guardrails with Amazon Bedrock via LangChain
The Modelmetry JS/TS SDK allows developers to easily integrate Modelmetry’s advanced guardrails and monitoring capabilities into their LLM-powered applications.
Using Guardrails with Malli, and the Guardrails registry from Malli
A demo showcasing the capabilities of Guardrails in LLMs.
A guardrails-ai validator that supports Cucumber Expressions instead of regular expressions.
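The appeal of Cucumber Expressions is that readable placeholders like `{int}` or `{word}` replace raw regex. A minimal sketch of the translation idea, assuming only a few parameter types (this is illustrative, not the validator's actual code or the full Cucumber Expressions grammar):

```python
import re

# Hypothetical mapping of a few Cucumber Expression parameter types to regex.
PARAM_PATTERNS = {
    "{int}": r"(-?\d+)",
    "{float}": r"(-?\d+\.\d+)",
    "{word}": r"(\w+)",
}

def cucumber_to_regex(expression: str) -> re.Pattern:
    """Compile a Cucumber-style expression into an anchored regex."""
    pattern = re.escape(expression)  # escape literal text first
    for param, regex in PARAM_PATTERNS.items():
        # The escaped placeholder (e.g. r"\{int\}") is swapped for its regex.
        pattern = pattern.replace(re.escape(param), regex)
    return re.compile(f"^{pattern}$")

rx = cucumber_to_regex("I have {int} cucumbers in my {word}")
match = rx.match("I have 42 cucumbers in my basket")
```

A validator built this way can both reject non-matching LLM output and extract the captured parameters from matching output.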
Short tutorial on using NVIDIA NeMo Guardrails
💂🏼 Build your Documentation AI with Nemo Guardrails
Repository accompanying the blog post "Azure Governance Best Practices: Ensuring Compliance with Policy-driven Guardrails", implementing policy-driven guardrails with Terraform.
I ran an app with NVIDIA NeMo Guardrails to find out what the framework does.
The Modelmetry Python SDK allows developers to easily integrate Modelmetry’s advanced guardrails and monitoring capabilities into their LLM-powered applications.
An e-commerce fashion assistant built with ChatGPT, Hugging Face, ltree, and pgvector.
Deploy Near Real Time Auto Remediation for your Google Cloud Organization and Projects
Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT
A RAG-based chatbot that incorporates a semantic cache and guardrails.
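A semantic cache reuses a stored answer when a new question's embedding is close enough to a previously answered one, skipping the retrieval and LLM call. A minimal sketch, assuming a toy bag-of-words "embedding" in place of a real embedding model:

```python
import math

def embed(text: str) -> dict[str, float]:
    """Toy embedding: L2-normalized bag-of-words counts (stand-in for a model)."""
    counts: dict[str, float] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(v * b.get(w, 0.0) for w, v in a.items())

class SemanticCache:
    """Return a cached answer when a new question is similar enough."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[dict[str, float], str]] = []

    def get(self, question: str):
        qv = embed(question)
        for vec, answer in self.entries:
            if cosine(qv, vec) >= self.threshold:
                return answer  # cache hit: skip retrieval + LLM call
        return None  # cache miss: run the full RAG pipeline, then put()

    def put(self, question: str, answer: str) -> None:
        self.entries.append((embed(question), answer))
```

The threshold trades freshness for cost: too low and paraphrases of different questions collide, too high and near-duplicates miss the cache.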
This repo hosts the Python SDK and related examples for AIMon, a proprietary, state-of-the-art system for detecting LLM quality issues such as hallucinations. It can be used for offline evals, continuous monitoring, or inline detection. We offer various model-quality metrics that are fast, reliable, and cost-effective.
LangEvals aggregates various language model evaluators into a single platform, providing a standard interface for a multitude of scores and LLM guardrails, so you can protect and benchmark your LLMs and pipelines.