UPDATE: This repository is archived effective July 1, 2024. For Responsible AI resources, visit aka.ms/rai-collection.
In this workshop you will learn how to use the prebuilt Azure AI Content Safety service in your applications, to ensure that the text or images you show to users, or that users enter themselves, do not contain violent, self-harm, hateful, or sexual content.
This workshop is part of a series focused on building developer awareness and hands-on experience with Responsible AI principles and practices. See the Responsible AI Hub for details.
By the end of the workshop, you will know how to:
- Detect and flag text that is unsuitable for end users, as sketched after this list.
- Block images that are inappropriate.
- Create applications with a safe and friendly tone.
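As a taste of the text exercise, here is a minimal sketch of flagging unsuitable text with the `azure-ai-contentsafety` Python SDK (v1.x). The endpoint, key, and severity threshold are placeholders, not values supplied by this workshop:

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholders: use the endpoint and key of your own Content Safety resource.
client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(AnalyzeTextOptions(text="Text entered by a user."))

# Each category (hate, self-harm, sexual, violence) returns a severity score;
# 0 is safe and higher values indicate more harmful content.
THRESHOLD = 2  # example cut-off; tune it for your application
for result in response.categories_analysis:
    if result.severity is not None and result.severity >= THRESHOLD:
        print(f"Flagged: {result.category} (severity {result.severity})")
```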
The Content Safety API can be used from several programming languages; for this lab, we'll be using Python. To follow along, you will need:
- Basic knowledge of Python.
- An Azure account (log in, or sign up for a free account).
- Visual Studio Code installed.
The workshop consists of the following exercises:
- Create an instance of Azure AI Content Safety
- Launch the project in GitHub Codespaces
- Analyze Text
- Analyze Images (see the sketch after this list)
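The image exercise follows the same pattern. Below is a minimal sketch, again assuming the `azure-ai-contentsafety` v1.x SDK, with a hypothetical file name and an example threshold:

```python
# pip install azure-ai-contentsafety
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

# Read the image as raw bytes; the SDK packages them into the request.
with open("uploaded_image.png", "rb") as f:
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

response = client.analyze_image(request)

# Block the image if any category's severity reaches the chosen threshold.
blocked = any(
    r.severity is not None and r.severity >= 2
    for r in response.categories_analysis
)
print("Blocked" if blocked else "Allowed")
```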
Ready to get started? Click here to go to the step-by-step tutorial.