llm guard #182
Warning: Review failed. The pull request is closed.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant OpenAI_Guard
    participant OpenAI_Client
    participant Vault
    participant Input_Scanner
    participant Output_Scanner
    User->>OpenAI_Guard: Provide prompt
    OpenAI_Guard->>Input_Scanner: Scan prompt
    Input_Scanner->>OpenAI_Guard: Return sanitized prompt, validation results, score
    OpenAI_Guard->>OpenAI_Client: Request completion with sanitized prompt
    OpenAI_Client->>OpenAI_Guard: Return response
    OpenAI_Guard->>Output_Scanner: Scan response
    Output_Scanner->>OpenAI_Guard: Return sanitized response, validation results, score
    OpenAI_Guard->>User: Provide sanitized response
```
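The guard flow in the diagram above can be sketched in plain Python. The scanner classes and `guarded_completion` helper below are illustrative stand-ins, not llm_guard's actual API; they only show the scan-prompt → complete → scan-output shape of the pipeline.

```python
# Minimal, self-contained sketch of the guarded-completion flow.
# TokenLimitScanner and NoRefusalScanner are hypothetical stand-ins
# for llm_guard's input/output scanners.

class TokenLimitScanner:
    """Input scanner: truncate prompts longer than max_tokens words."""
    def __init__(self, max_tokens=8):
        self.max_tokens = max_tokens

    def scan(self, text):
        words = text.split()
        valid = len(words) <= self.max_tokens
        sanitized = " ".join(words[: self.max_tokens])
        return sanitized, valid, 0.0 if valid else 1.0


class NoRefusalScanner:
    """Output scanner: flag canned refusals in the model response."""
    def scan(self, text):
        refused = text.lower().startswith("i cannot")
        return text, not refused, 1.0 if refused else 0.0


def guarded_completion(prompt, complete, input_scanners, output_scanners):
    """Run scanners before and after the completion call, as in the diagram."""
    for scanner in input_scanners:
        prompt, valid, score = scanner.scan(prompt)
    response = complete(prompt)
    for scanner in output_scanners:
        response, valid, score = scanner.scan(response)
    return response


# Usage with a fake completion function standing in for the OpenAI client:
echo = lambda p: f"Echo: {p}"
result = guarded_completion(
    "one two three four five six seven eight nine ten",
    echo,
    [TokenLimitScanner(max_tokens=5)],
    [NoRefusalScanner()],
)
print(result)  # Echo: one two three four five
```

A real script would reject or log when a scanner reports `valid == False`; this sketch only sanitizes and passes the text through.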
PR Type
Enhancement

Description
- Added openai-guard.py to demonstrate the use of llm_guard with the OpenAI API.
- The script reads the API key from the OPENAI_API_KEY environment variable.
- Configures input scanners (Anonymize, Toxicity, TokenLimit, PromptInjection) and output scanners (Deanonymize, NoRefusal, Relevance, Sensitive).

Changes walkthrough 📝
openai-guard.py — Add OpenAI guard script with prompt and response validation
- llmguard/openai-guard.py
- Demonstrates llm_guard with the OpenAI API.
- Reads the API key from OPENAI_API_KEY.
- Performs prompt and response validation.
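Since the script takes its credentials from the OPENAI_API_KEY environment variable, a defensive lookup can be sketched as below. The `load_api_key` helper is hypothetical, added here for illustration; it is not part of the PR.

```python
import os


def load_api_key(env=None):
    """Return the OpenAI API key from the given mapping (default: os.environ),
    or None if it is missing or empty. Hypothetical helper for illustration."""
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY")
    # Returning None lets the caller fail early with a clear message
    # instead of hitting a confusing authentication error later.
    return key or None


# Usage: pass a mapping explicitly to test without touching os.environ.
print(load_api_key({"OPENAI_API_KEY": "sk-test"}))  # sk-test
print(load_api_key({}))                             # None
```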