
Step-by-step tutorial that teaches you how to use Azure Content Safety, the prebuilt AI service that helps ensure content sent to users is filtered to safeguard them from risky or undesirable outcomes.

Azure-Samples/rai-content-safety-workshop

Repository files navigation

UPDATE: This repository is archived effective July 1, 2024. For Responsible AI resources, visit aka.ms/rai-collection.

RAI: Content Safety Workshop

In this workshop you will learn how to use the prebuilt Azure Content Safety service in your applications to ensure that text or images, whether sent to users or entered by them, do not contain violent, self-harm, hate, or sexual content.

This workshop is part of a series focused on building developer awareness and hands-on experience with Responsible AI principles and practices. See the Responsible AI Hub for details.

Learning Objectives

By the end of the workshop you will be able to:

  • Detect and flag text that is unsuitable for end users.
  • Block images that are inappropriate.
  • Create applications with a safe and friendly tone.

Prerequisites

The Content Safety API can be used from several programming languages. For this lab, we'll be using Python.
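Content Safety reports a numeric severity score for each of the four harm categories (Hate, SelfHarm, Sexual, Violence). A minimal sketch of how an application might gate content on those scores; the `should_block` helper and the threshold of 2 are illustrative assumptions, not a service default:

```python
# Sketch: decide whether content should be blocked, given per-category
# severity scores from a Content Safety analysis call.
# The threshold value is an illustrative assumption, not a service default.

HARM_CATEGORIES = ("Hate", "SelfHarm", "Sexual", "Violence")

def should_block(severities: dict, threshold: int = 2) -> bool:
    """Return True if any harm category meets or exceeds the threshold."""
    return any(severities.get(cat, 0) >= threshold for cat in HARM_CATEGORIES)

# Example: high severity in one category, zero elsewhere.
print(should_block({"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4}))  # True
print(should_block({"Hate": 0, "Violence": 0}))  # False
```

Tuning the threshold per category (for example, stricter on SelfHarm than on Violence) is a common refinement once you see real scores from the service.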

Getting Started

The workshop consists of the following exercises:

  1. Create an instance of Azure Content Safety
  2. Launch the project in GitHub Codespaces
  3. Analyze Text
  4. Analyze Images

Ready to get started? Click here to go to the step-by-step tutorial.
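As a preview of the Analyze Text exercise, the sketch below uses the `azure-ai-contentsafety` Python SDK (`pip install azure-ai-contentsafety`). The environment variable names and the `summarize` helper are assumptions for illustration; `summarize` flattens the response's list of per-category results into a plain dict:

```python
import os

def summarize(categories_analysis) -> dict:
    """Flatten a list of per-category analysis items into {category: severity}.

    Accepts SDK model objects (with .category/.severity attributes) or
    plain dicts, so it can be exercised without calling the service.
    """
    result = {}
    for item in categories_analysis:
        category = getattr(item, "category", None) or item["category"]
        severity = getattr(item, "severity", None)
        if severity is None:
            severity = item["severity"]
        result[str(category)] = int(severity)
    return result

def analyze_text_sample(text: str) -> dict:
    # SDK imports are local so summarize() stays usable without the package.
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    # Assumed environment variable names -- adjust to match your setup.
    client = ContentSafetyClient(
        endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
    )
    response = client.analyze_text(AnalyzeTextOptions(text=text))
    return summarize(response.categories_analysis)

if __name__ == "__main__" and "CONTENT_SAFETY_ENDPOINT" in os.environ:
    print(analyze_text_sample("Sample text to analyze."))
```

The Analyze Images exercise follows the same pattern, passing image bytes via the SDK's image-analysis request model instead of a text string.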
