Adjust Safety.ipynb quickstart with safety filter details #125

Merged
merged 3 commits into from
May 13, 2024
153 changes: 63 additions & 90 deletions quickstarts/Safety.ipynb
@@ -61,7 +61,7 @@
"source": [
"The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.\n",
"\n",
"Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, we're focused on the code."
"Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code."
]
},
{
@@ -75,6 +75,17 @@
"!pip install -q -U google-generativeai # Install the Python SDK"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3VAUtJubX7MG"
},
"source": [
"## Import the Gemini python SDK\n",
"\n",
"Once the kernel is restarted, you can import the Gemini SDK:"
]
},
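{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of this step (the PR's actual cell is collapsed in this diff).\n",
"# It assumes a Colab secret named GOOGLE_API_KEY holds your API key.\n",
"import google.generativeai as genai\n",
"\n",
"from google.colab import userdata\n",
"genai.configure(api_key=userdata.get('GOOGLE_API_KEY'))"
]
},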
{
"cell_type": "code",
"execution_count": null,
@@ -116,7 +127,9 @@
"id": "LZfoK3I3hu6V"
},
"source": [
"## Prompt Feedback\n",
"## Send your prompt request to Gemini\n",
"\n",
"Pick the prompt you want to use to test the safety filters settings. An examples could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark` which was previously tested and trigger the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n",
"\n",
"The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)."
]
@@ -131,7 +144,7 @@
"source": [
"model = genai.GenerativeModel('gemini-1.0-pro')\n",
"\n",
"unsafe_prompt = # Put your unsafe prompt here\n",
"unsafe_prompt = \"Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark\"\n",
"response = model.generate_content(unsafe_prompt)"
]
},
@@ -141,11 +154,13 @@
"id": "WR_2A_sxk8sK"
},
"source": [
"This response object gives you safety feedback in two ways:\n",
"This response object gives you safety feedback about the candidate answers Gemini generates to you.\n",
"\n",
"* The `prompt_feedback.safety_ratings` attribute contains a list of safety ratings for the input prompt.\n",
"For each candidate answer you need to check `response.candidates.finish_reason`.\n",
"\n",
"* If your prompt is blocked, `prompt_feedback.block_reason` field will explain why."
"As you can find on the [Gemini API safety filters documentation](https://ai.google.dev/gemini-api/docs/safety-settings#safety-feedback):\n",
"- if the `candidate.finish_reason` is `FinishReason.STOP` means that your generation request ran successfully\n",
"- if the `candidate.finish_reason` is `FinishReason.SAFETY` means that your generation request was blocked by safety reasons. It also means that the `response.text` structure will be empty."
]
},
{
@@ -156,7 +171,16 @@
},
"outputs": [],
"source": [
"bool(response.prompt_feedback.block_reason)"
"print(response.candidates[0].finish_reason)"
]
},
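{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the branching described above; comparing the proto\n",
"# enum via its `.name` attribute is an assumption, not code from this PR.\n",
"candidate = response.candidates[0]\n",
"if candidate.finish_reason.name == \"STOP\":\n",
"    print(\"The generation request ran successfully.\")\n",
"elif candidate.finish_reason.name == \"SAFETY\":\n",
"    print(\"The generation request was blocked by the safety filters.\")"
]
},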
{
"cell_type": "markdown",
"metadata": {
"id": "XBdqPso3kamW"
},
"source": [
"If the `finish_reason` is `FinishReason.SAFETY` you can check which filter caused the block checking the `safety_ratings` list for the candidate answer:"
]
},
{
@@ -167,27 +191,30 @@
},
"outputs": [],
"source": [
"response.prompt_feedback.safety_ratings"
"print(response.candidates[0].safety_ratings)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "72b4a8808bb9"
"id": "z9-SdzjbxWXT"
},
"source": [
"If the prompt is blocked because of the safety ratings, you will not get any candidates in the response:"
"As the request was blocked by the safety filters, the `response.text` field will be empty (as nothing as generated by the model):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "f20d9269325d"
"id": "L1Da4cJ3xej3"
},
"outputs": [],
"source": [
"response.candidates"
"try:\n",
" print(response.text)\n",
"except:\n",
" print(\"No information generated by the model.\")"
]
},
{
@@ -196,16 +223,13 @@
"id": "4672af98ac57"
},
"source": [
"### Safety settings"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2a6229f6d3a1"
},
"source": [
"Adjust the safety settings and the prompt is no longer blocked. The Gemini API has four configurable safety settings."
"## Customizing safety settings\n",
"\n",
"Depending on the scenario you are working with, it may be necessary to customize the safety filters behaviors to allow a certain degree of unsafety results.\n",
"\n",
"To make this customization you must define a `safety_settings` dictionary as part of your `model.generate_content()` request. In the example below, all the filters are being set to do not block contents.\n",
"\n",
"**Important:** To guarantee the Google commitment with the Responsible AI development and its [AI Principles](https://ai.google/responsibility/principles/), for some prompts Gemini will avoid generating the results even if you set all the filters to none."
]
},
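{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the request described above (the PR's actual cell is\n",
"# collapsed in this diff); the enum-keyed mapping is one accepted format.\n",
"from google.generativeai.types import HarmBlockThreshold, HarmCategory\n",
"\n",
"response = model.generate_content(\n",
"    unsafe_prompt,\n",
"    safety_settings={\n",
"        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,\n",
"        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,\n",
"        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,\n",
"        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,\n",
"    })"
]
},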
{
@@ -229,113 +253,64 @@
{
"cell_type": "markdown",
"metadata": {
"id": "86c560e0a641"
"id": "564K7R8rwWhs"
},
"source": [
"With the new settings, the `blocked_reason` is no longer set."
"Checking again the `candidate.finish_reason` information, if the request was not too unsafe, it must show now the value as `FinishReason.STOP` which means that the request was successfully processed by Gemini."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0c2847c49262"
"id": "LazB08GBpc1w"
},
"outputs": [],
"source": [
"bool(response.prompt_feedback.block_reason)"
"print(response.candidates[0].finish_reason)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "47298a4eef40"
},
"source": [
"And a candidate response is returned."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "028febe8df68"
},
"outputs": [],
"source": [
"len(response.candidates)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ujVlQoC43N3B"
},
"source": [
"You can check `response.text` for the response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "de8ee74634af"
},
"outputs": [],
"source": [
"response.text"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3d401c247957"
},
"source": [
"### Candidate ratings"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3d306960dffb"
"id": "86c560e0a641"
},
"source": [
"For a prompt that is not blocked, the response object contains a list of `candidate` objects (just 1 for now). Each candidate includes a `finish_reason`:"
"Since the request was successfully generated, you can check the result on the `response.text`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "e49b53f69a2c"
"id": "0c2847c49262"
},
"outputs": [],
"source": [
"candidate = response.candidates[0]\n",
"candidate.finish_reason"
"try:\n",
" print(response.text)\n",
"except:\n",
" print(\"No information generated by the model.\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "badddf10089b"
"id": "47298a4eef40"
},
"source": [
"`FinishReason.STOP` means that the model finished its output normally.\n",
"\n",
"`FinishReason.SAFETY` means the candidate's `safety_ratings` exceeded the request's `safety_settings` threshold."
"And if you check the safety filters ratings, as you set all filters to be ignored, no filtering category was trigerred:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2b60d9f96af0"
"id": "028febe8df68"
},
"outputs": [],
"source": [
"candidate.safety_ratings"
"print(response.candidates[0].safety_ratings)"
]
},
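{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# An optional sketch: print one rating per line, assuming each entry\n",
"# exposes `category` and `probability` enum fields with a `.name` attribute.\n",
"for rating in response.candidates[0].safety_ratings:\n",
"    print(f\"{rating.category.name}: {rating.probability.name}\")"
]
},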
{
@@ -352,13 +327,11 @@
"\n",
"There are 4 configurable safety settings for the Gemini API:\n",
"* `HARM_CATEGORY_DANGEROUS`\n",
"*`HARM_CATEGORY_HARASSMENT`\n",
"* `HARM_CATEGORY_HARASSMENT`\n",
"* `HARM_CATEGORY_SEXUALLY_EXPLICIT`\n",
"* `HARM_CATEGORY_DANGEROUS`\n",
"\n",
"Note: while the API [reference](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) includes others, the remainder are for older models.\n",
"\n",
"* You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n",
"You can refer to the safety settings using either their full name, or the aliases like `DANGEROUS` used in the Python code above.\n",
"\n",
"Safety settings can be set in the [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) constructor.\n",
"\n",
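{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of setting safety settings in the constructor, as mentioned\n",
"# above; the short alias form is assumed to normalize to the full enum names.\n",
"model = genai.GenerativeModel('gemini-1.0-pro',\n",
"                              safety_settings={'HARASSMENT': 'block_none'})\n",
"response = model.generate_content(unsafe_prompt)"
]
},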