🤖 feat(google): Add safety settings configuration (#2644)
* 🤖 feat(google): Add safety settings configuration

- Implement safety settings configuration in GoogleClient.js
- Add safety settings variables in .env.example
- Update documentation to explain safety settings and clarify model usage

* fix(google): Apply safety settings only to Gemini models

Previously, the safety settings were applied to all models, regardless of whether they were Gemini models. This commit ensures that the safety settings are applied only to models whose name contains the string "gemini".

The changes include:

- Extracting the model name from `payload.parameters.model`
- Checking if the model name exists and contains the "gemini" string
- Only applying the safety settings if the model name contains "gemini"
- Ignoring the safety settings for non-Gemini models

This fix ensures that the safety settings are used only for the intended Gemini models and are not applied to other models where they are not applicable.
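
A minimal sketch of the gating described above, assuming the payload exposes the model name at `payload.parameters.model` (the actual change is in the GoogleClient.js diff below):

```js
// Sketch only: apply Gemini safety settings conditionally, as described above.
// Assumes payload.parameters.model holds the requested model name.
function applyGeminiSafetySettings(payload) {
  const modelName = payload.parameters?.model;

  // Non-Gemini models: leave the payload untouched.
  if (!modelName || !modelName.toLowerCase().includes('gemini')) {
    return payload;
  }

  // Each threshold falls back to the API default when its env var is unset.
  payload.safetySettings = [
    {
      category: 'HARM_CATEGORY_HARASSMENT',
      threshold: process.env.GOOGLE_SAFETY_HARASSMENT || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
    },
    // ...one entry per category, as in the full diff below.
  ];
  return payload;
}
```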

* Update GoogleClient.js

* fix(google): Apply safety settings only to Gemini models

---------

Co-authored-by: Oliver Faust <oliver@f4ust.de>
danny-avila and lidonius1122 authored May 9, 2024
1 parent b6d1f5f commit 5293b73
Showing 3 changed files with 85 additions and 5 deletions.
14 changes: 14 additions & 0 deletions .env.example
@@ -122,6 +122,20 @@ GOOGLE_KEY=user_provided
# Vertex AI
# GOOGLE_MODELS=gemini-1.5-pro-preview-0409,gemini-1.0-pro-vision-001,gemini-pro,gemini-pro-vision,chat-bison,chat-bison-32k,codechat-bison,codechat-bison-32k,text-bison,text-bison-32k,text-unicorn,code-gecko,code-bison,code-bison-32k

# Google Gemini Safety Settings
# NOTE (Vertex AI): You do not have access to the BLOCK_NONE setting by default.
# To use this restricted HarmBlockThreshold setting, you will need to either:
#
# (a) Get access through an allowlist via your Google account team
# (b) Switch your account type to monthly invoiced billing following this instruction:
# https://cloud.google.com/billing/docs/how-to/invoiced-billing
#
# GOOGLE_SAFETY_SEXUALLY_EXPLICIT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_HATE_SPEECH=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_HARASSMENT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_DANGEROUS_CONTENT=BLOCK_ONLY_HIGH


#============#
# OpenAI #
#============#
32 changes: 32 additions & 0 deletions api/app/clients/GoogleClient.js
@@ -677,6 +677,9 @@ class GoogleClient extends BaseClient {
};
}

const safetySettings = _payload.safetySettings;
requestOptions.safetySettings = safetySettings;

const result = await client.generateContentStream(requestOptions);
for await (const chunk of result.stream) {
const chunkText = chunk.text();
@@ -688,9 +691,11 @@
return reply;
}

const safetySettings = _payload.safetySettings;
const stream = await model.stream(messages, {
signal: abortController.signal,
timeout: 7000,
safetySettings: safetySettings,
});

for await (const chunk of stream) {
@@ -720,6 +725,33 @@
}

async sendCompletion(payload, opts = {}) {
const modelName = payload.parameters?.model;

if (modelName && modelName.toLowerCase().includes('gemini')) {
const safetySettings = [
{
category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
threshold:
process.env.GOOGLE_SAFETY_SEXUALLY_EXPLICIT || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
{
category: 'HARM_CATEGORY_HATE_SPEECH',
threshold: process.env.GOOGLE_SAFETY_HATE_SPEECH || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
{
category: 'HARM_CATEGORY_HARASSMENT',
threshold: process.env.GOOGLE_SAFETY_HARASSMENT || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
{
category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
threshold:
process.env.GOOGLE_SAFETY_DANGEROUS_CONTENT || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
},
];

payload.safetySettings = safetySettings;
}

let reply = '';
reply = await this.getCompletion(payload, opts);
return reply.trim();
44 changes: 39 additions & 5 deletions docs/install/configuration/dotenv.md
@@ -296,15 +296,49 @@ GOOGLE_KEY=user_provided
GOOGLE_REVERSE_PROXY=
```

- Customize the available models, separated by commas, **without spaces**.
- The first will be default.
- Leave it blank or commented out to use internal settings (default: all listed below).
Depending on whether you are using the Vertex AI or Gemini API, you can choose the corresponding set of models. Customize the available models, separated by commas, **without spaces**. The first model in the list will be used as the default. Leave the line blank or commented out to use the internal settings (default: all models listed below).

```bash
# Gemini API
# GOOGLE_MODELS=gemini-1.0-pro,gemini-1.0-pro-001,gemini-1.0-pro-latest,gemini-1.0-pro-vision-latest,gemini-1.5-pro-latest,gemini-pro,gemini-pro-vision

# Vertex AI
# GOOGLE_MODELS=gemini-1.5-pro-preview-0409,gemini-1.0-pro-vision-001,gemini-pro,gemini-pro-vision,chat-bison,chat-bison-32k,codechat-bison,codechat-bison-32k,text-bison,text-bison-32k,text-unicorn,code-gecko,code-bison,code-bison-32k
```

Both the Vertex AI and Gemini API provide safety settings that allow you to control the level of content filtering based on different categories. You can configure these settings using the following environment variables:

```bash
# all available models as of 12/16/23
GOOGLE_MODELS=gemini-pro,gemini-pro-vision,chat-bison,chat-bison-32k,codechat-bison,codechat-bison-32k,text-bison,text-bison-32k,text-unicorn,code-gecko,code-bison,code-bison-32k
# Google Safety Settings
# NOTE: You do not have access to the BLOCK_NONE setting by default.
# To use this restricted HarmBlockThreshold setting, you will need to either:
#
# (a) Get access through an allowlist via your Google account team
# (b) Switch your account type to monthly invoiced billing following this instruction:
# https://cloud.google.com/billing/docs/how-to/invoiced-billing
#
# GOOGLE_SAFETY_SEXUALLY_EXPLICIT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_HATE_SPEECH=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_HARASSMENT=BLOCK_ONLY_HIGH
# GOOGLE_SAFETY_DANGEROUS_CONTENT=BLOCK_ONLY_HIGH
```

The available safety settings are:

- `GOOGLE_SAFETY_SEXUALLY_EXPLICIT`: Controls the filtering of sexually explicit content.
- `GOOGLE_SAFETY_HATE_SPEECH`: Controls the filtering of hate speech content.
- `GOOGLE_SAFETY_HARASSMENT`: Controls the filtering of harassment content.
- `GOOGLE_SAFETY_DANGEROUS_CONTENT`: Controls the filtering of dangerous content.

For each setting, you can choose one of the following values:

- `BLOCK_NONE`: Do not block any content in this category (requires additional access).
- `BLOCK_LOW_AND_ABOVE`: Block content with low or higher probability of belonging to this category.
- `BLOCK_MED_AND_ABOVE`: Block content with medium or higher probability of belonging to this category.
- `BLOCK_ONLY_HIGH`: Only block content with high probability of belonging to this category.

If you leave the safety settings commented out, the default values provided by the API will be used.
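
For illustration only (not LibreChat's exact code path), here is a sketch of how one of these values could be forwarded to a Gemini request, assuming the `@google/generative-ai` Node SDK and an API key in `GOOGLE_KEY`:

```js
// Sketch: forward a configured threshold to a Gemini request.
// Assumes @google/generative-ai is installed and GOOGLE_KEY holds an API key.
const { GoogleGenerativeAI } = require('@google/generative-ai');

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_KEY);
const model = genAI.getGenerativeModel({
  model: 'gemini-pro',
  // Threshold strings such as 'BLOCK_ONLY_HIGH' are passed through as-is,
  // mirroring how the commit builds its safetySettings array.
  safetySettings: [
    {
      category: 'HARM_CATEGORY_HATE_SPEECH',
      threshold: process.env.GOOGLE_SAFETY_HATE_SPEECH || 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
    },
  ],
});

// model.generateContent(prompt) then applies the configured filtering server-side.
```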

### OpenAI

- To get your OpenAI API key, you need to:
