diff --git a/articles/ai-services/openai/how-to/content-filters.md b/articles/ai-services/openai/how-to/content-filters.md
index e9f56bd331..6d1cbd755d 100644
--- a/articles/ai-services/openai/how-to/content-filters.md
+++ b/articles/ai-services/openai/how-to/content-filters.md
@@ -38,12 +38,11 @@ You can configure the following filter categories in addition to the default har
 |Filter category |Status |Default setting |Applied to prompt or completion? |Description |
 |---------|---------|---------|---------|---|
 |Prompt Shields for direct attacks (jailbreak) |GA| On | User prompt | Filters / annotates user prompts that might present a Jailbreak Risk. For more information about annotations, visit [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter?tabs=python#annotations-preview). |
-|Prompt Shields for indirect attacks | GA| On| User prompt | Filter / annotate Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, a potential vulnerability where third parties place malicious instructions inside of documents that the generative AI system can access and process. Required: [Document ](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt)formatting. |
+|Prompt Shields for indirect attacks | GA| Off | User prompt | Filter / annotate Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, a potential vulnerability where third parties place malicious instructions inside of documents that the generative AI system can access and process. Requires: [Document embedding and formatting](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt). |
 | Protected material - code |GA| On | Completion | Filters protected code or gets the example citation and license information in annotations for code snippets that match any public code sources, powered by GitHub Copilot. For more information about consuming annotations, see the [content filtering concepts guide](/azure/ai-services/openai/concepts/content-filter#annotations-preview) |
 | Protected material - text | GA| On | Completion | Identifies and blocks known text content from being displayed in the model output (for example, song lyrics, recipes, and selected web content). |
-| Groundedness* | Preview |Off | Completion |Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness refers to instances where the LLMs produce information that is non-factual or inaccurate from what was present in the source materials. |
+| Groundedness* | Preview |Off | Completion |Detects whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Ungroundedness refers to instances where the LLMs produce information that is non-factual or inaccurate from what was present in the source materials. Requires: [Document embedding and formatting](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt).|
-*Requires embedding documents in your prompt. [Read more](/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cuser-prompt%2Cpython-new#embedding-documents-in-your-prompt).
 
 ## Configure content filters with Azure AI Studio
diff --git a/articles/ai-studio/how-to/develop/sdk-overview.md b/articles/ai-studio/how-to/develop/sdk-overview.md
index a9ac9a71c6..e45d6bfef8 100644
--- a/articles/ai-studio/how-to/develop/sdk-overview.md
+++ b/articles/ai-studio/how-to/develop/sdk-overview.md
@@ -8,7 +8,7 @@ ms.custom:
 - build-2024
 - ignite-2024
 ms.topic: overview
-ms.date: 11/19/2024
+ms.date: 11/25/2024
 ms.reviewer: dantaylo
 ms.author: sgilley
 author: sdgilley
@@ -181,7 +181,7 @@ The [Azure AI model inference service](/azure/ai-studio/ai-services/model-infere
 
 To use the model inference service, first ensure that your project has an AI Services connection (in the management center).
 
-Install the `azure-ai-inferencing` client library:
+Install the `azure-ai-inference` client library:
 
 ::: zone pivot="programming-language-python"
 
@@ -237,7 +237,7 @@ To learn more about using the Azure AI inferencing client, check out the [Azure
 
 The inferencing client supports for creating prompt messages from templates. The template allows you to dynamically generate prompts using inputs that are available at runtime.
 
-To use prompt templates, install the `azure-ai-inferencing` package:
+To use prompt templates, install the `azure-ai-inference` package:
 
 ```bash
 pip install azure-ai-inference