End-to-End ML Model Deployment on Google Cloud Platform (GCP)

This project demonstrates a complete workflow for deploying a machine learning model on Google Cloud Platform. It covers the following steps:

  1. Model Development: Train and build your machine learning model using any framework (e.g., TensorFlow, PyTorch). In this project, the model was developed in Google Cloud Shell, a free, browser-based command-line environment on GCP (a minimal training sketch follows this list).
  2. Dockerization: Create a Dockerfile and containerize your model, including its dependencies and environment. This ensures consistent and portable deployment across different environments (a sketch of the prediction service that runs inside the container appears after this list).
  3. Artifact Registry (AR): Push the built Docker image to Google Artifact Registry, a secure, managed repository for storing container images.
  4. Kubernetes Engine (GKE) Deployment: Deploy the containerized model as a service on Google Kubernetes Engine, a managed container orchestration platform. This allows for scalable and automated deployments.
  5. Frontend Integration: Create a basic frontend application (e.g., using Flask or Streamlit) to interact with the exposed endpoint of your deployed model (a Streamlit sketch is included after this list).
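
As a minimal illustration of step 1, the sketch below trains and saves a small scikit-learn model. The framework, dataset, and output file name (`model.joblib`) are assumptions for illustration, not this project's actual code:

```python
# train.py -- hypothetical training sketch (step 1).
# Framework, dataset, and output path are placeholders, not the project's actual code.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import joblib

X, y = load_iris(return_X_y=True)  # toy dataset standing in for real training data
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

joblib.dump(model, "model.joblib")  # saved artifact that the Docker image will package
print("model saved to model.joblib")
```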
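
The container built and deployed in steps 2-4 needs a serving process that exposes the model over HTTP. Below is a hedged Flask sketch of such a prediction service; the `/predict` route, port 8080, and model file name are assumptions for illustration:

```python
# app.py -- hypothetical prediction service packaged into the Docker image (steps 2-4).
# Endpoint name, port, and model path are assumptions for illustration.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # artifact produced by the training sketch above

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["instances"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    predictions = model.predict(features).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    # GKE routes traffic to the container's exposed port; 8080 is a common choice.
    app.run(host="0.0.0.0", port=8080)
```

A Dockerfile would typically copy `app.py` and `model.joblib` into the image and start this server; the resulting image is then pushed to Artifact Registry and referenced by the GKE Deployment.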
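
For step 5, a minimal frontend can forward user input to the service exposed by GKE. The Streamlit sketch below assumes the `/predict` endpoint from the service above; `SERVICE_URL` is a placeholder to be replaced with the external IP or hostname of the deployed service:

```python
# frontend.py -- hypothetical Streamlit frontend (step 5) calling the deployed endpoint.
# SERVICE_URL is a placeholder; replace it with the address exposed by GKE.
import requests
import streamlit as st

SERVICE_URL = "http://<EXTERNAL-IP>/predict"  # placeholder, not a real address

st.title("ML Model Demo")
values = st.text_input("Comma-separated feature values", "5.1, 3.5, 1.4, 0.2")

if st.button("Predict"):
    features = [[float(v) for v in values.split(",")]]
    response = requests.post(SERVICE_URL, json={"instances": features}, timeout=10)
    st.write(response.json())
```

Running `streamlit run frontend.py` starts the UI locally and sends each prediction request to the model service running on GKE.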
This project provides a starting point for learning how to deploy machine learning models on GCP using industry-standard tools and practices. It showcases the benefits of:

  • Containerization: Enables consistent and portable deployments.
  • Artifact Registry: Provides a secure and centralized location for storing container images.
  • Kubernetes Engine: Offers an automated, scalable, and flexible platform for container orchestration.