Prescriptive MLOps scenarios for building, deploying and monitoring machine learning models with Azure Machine Learning.

nfmoore/azureml-mlops-example-scenarios


Example Scenarios: MLOps with Azure Machine Learning

📚 Overview

MLOps is a set of repeatable, automated, and collaborative workflows, underpinned by best practices, that empower teams of ML professionals to get their machine learning models into production quickly and reliably.

This repository provides prescriptive guidance for building, deploying, and monitoring machine learning models with Azure Machine Learning, in line with MLOps principles and practices.

These example scenarios provide an end-to-end approach to MLOps in Azure based on common inference scenarios. They focus on Azure Machine Learning and GitHub Actions.

Note: the Azure MLOps (v2) Solution Accelerator is intended to serve as the starting point for MLOps implementation in Azure.

💻 Getting Started

This repository contains several example scenarios for productionising models using Azure Machine Learning. Two approaches are considered:

  • Standalone: where services consuming models operate entirely within Azure Machine Learning.
  • Native integrations: where models deployed in Azure Machine Learning are consumed by other Azure services via out-of-the-box integrations.

Users of Azure Machine Learning might choose to integrate with other services available within Azure to better align with existing workflows, enable new inference scenarios, or gain greater flexibility.

All example scenarios focus on classical machine learning problems. An adapted version of the UCI Credit Card Client Default dataset is used to illustrate each example scenario.
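To give a feel for the kind of tabular classification problem involved, the sketch below builds a synthetic stand-in for the credit-card default data (the column meanings are invented for illustration, not the dataset's actual schema) and performs a seeded train/test split:

```python
import numpy as np

# Synthetic stand-in for the adapted UCI Credit Card Client Default data.
# Features and ranges are illustrative only.
rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.integers(10_000, 500_000, n),  # credit limit (illustrative)
    rng.integers(21, 70, n),           # age (illustrative)
    rng.integers(0, 50_000, n),        # last payment amount (illustrative)
])
y = rng.integers(0, 2, n)  # binary target: 1 = default, 0 = no default

# Seeded 80/20 train/test split.
idx = rng.permutation(n)
split = int(0.8 * n)
X_train, X_test = X[idx[:split]], X[idx[split:]]
y_train, y_test = y[idx[:split]], y[idx[split:]]
print(X_train.shape, X_test.shape)  # (800, 3) (200, 3)
```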

Setup

Detailed instructions for deploying this proof-of-concept are outlined in the Step-by-Step Setup section of this repository. This proof-of-concept illustrates how to:

  • Manage and version machine learning models, environments, and datasets within Azure Machine Learning.
  • Promote a machine learning model to downstream environments.
  • Deploy models to managed endpoints for batch and online inference scenarios.
  • Deploy an Azure Data Factory pipeline to orchestrate workflows consuming a batch managed endpoint.
  • Develop build and deployment workflows for the different inference scenarios.
  • Collect and process inference data to detect data drift.
  • Monitor workloads for usage, performance and data drift.
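The drift-detection steps above can be sketched with a population stability index (PSI), a common drift metric; this is an illustrative implementation, not the one used in the repository:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and new data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the bin proportions to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 5000)
drifted = rng.normal(0.5, 1, 5000)  # mean-shifted distribution
print(psi(baseline, baseline))  # 0.0: identical distributions
print(psi(baseline, drifted))   # clearly positive: drift detected
```

A common rule of thumb is to treat PSI above roughly 0.2 as significant drift worth investigating.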

Standalone deployments within Azure Machine Learning

| Example Scenario | Inference Scenario | Description |
| --- | --- | --- |
| Batch Managed Endpoint | Batch | Consume a registered model as a batch managed endpoint within Azure Machine Learning for high-throughput scenarios that can be executed within a single Azure Machine Learning workspace. |
| Online Managed Endpoint | Online | Consume a registered model as an online managed endpoint within Azure Machine Learning for low-latency scenarios. |
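Once deployed, an online managed endpoint is consumed over HTTPS. The sketch below builds a scoring request; the URI, key, and feature row are hypothetical placeholders, and the exact payload shape depends on the deployment's scoring script (a `"data"` key holding rows of feature values is one common convention):

```python
import json

# Hypothetical values: the real scoring URI and key come from the
# deployed Azure Machine Learning online endpoint.
scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
api_key = "<endpoint-key>"

# Illustrative feature row; the schema is defined by the scoring script.
payload = {"data": [[20000, 24, 2, 2, 1, 6000]]}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
body = json.dumps(payload)
# requests.post(scoring_uri, data=body, headers=headers) would return scores.
print(body)
```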

Native integrations between Azure services and deployments within Azure Machine Learning

| Example Scenario | Inference Scenario | Description |
| --- | --- | --- |
| Azure Data Factory / Synapse Pipeline | Batch | Consume a registered model as a batch managed endpoint within Azure Machine Learning for high-throughput scenarios orchestrated via Azure Data Factory or Azure Synapse Pipelines. |
| Power BI | Online | Consume a registered model deployed as an online managed endpoint within a Power BI report. |
| Azure Stream Analytics | Streaming | Consume a registered model deployed as an online managed endpoint within an Azure Stream Analytics User Defined Function for processing high-volume data streams. |

⚖️ License

Details on licensing for the project can be found in the LICENSE file.