Commit 925776f

rendering md version draft #1

Madhur Prashant authored and committed Apr 2, 2024
1 parent 4e56047 commit 925776f
Showing 1 changed file with 10 additions and 5 deletions.
15 changes: 10 additions & 5 deletions blog_post.md
@@ -1,4 +1,6 @@
-# How Twilio used Amazon SageMaker MLOps Pipelines with PrestoDB to enable frequent model re-training, optimized batch processing, and detect burner phone numbers
+# How Twilio used Amazon SageMaker MLOps Pipelines with PrestoDB to
+enable frequent model re-training, optimized batch processing, and
+detect burner phone numbers


*Amit Arora*, *Madhur Prashant*, *Antara Raisa*, *Johnny Chivers*
@@ -1078,7 +1080,8 @@ to walk through the solution:
## Results
Here is a compilation of some queries and responses generated by our
-implementation from the real time endpoint deployment stage:
+implementation from the real time endpoint deployment stage: \[ to add
+results here, querying data, fetching it, making predictions etc\]
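While the bracketed note above is still open, a minimal sketch of how predictions could be fetched from the deployed real-time endpoint might look like the following. The endpoint name, feature layout, and `text/csv` content type are assumptions for illustration, not details confirmed by the post:

```python
import io


def to_csv_payload(rows):
    """Serialize feature rows into the headerless CSV body that a
    real-time endpoint accepting text/csv input typically expects."""
    buf = io.StringIO()
    for row in rows:
        buf.write(",".join(str(v) for v in row) + "\n")
    return buf.getvalue()


def predict(endpoint_name, rows, region="us-east-1"):
    """Invoke the deployed endpoint; endpoint_name is whatever the
    pipeline's deployment step created (a placeholder here)."""
    import boto3  # deferred so the payload helper stays testable offline

    client = boto3.client("sagemaker-runtime", region_name=region)
    resp = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/csv",
        Body=to_csv_payload(rows),
    )
    return resp["Body"].read().decode("utf-8")
```

The response body's format (e.g. one score per input line) depends on the model container, so callers would parse it accordingly.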
<table style="width:50%;">
<caption>mlops-pipeline-prestodb results</caption>
@@ -1104,16 +1107,18 @@ mlops-pipeline-prestodb results
## Cleanup
-\> need to perform and add a clean up section
+\> need to perform and add a clean up section \[we can add the CFT
+clean up section here for the ec2 instance with the presto server
+running on it\]
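For the CFT cleanup mentioned in the note, a hedged sketch of tearing down the CloudFormation stack that launched the Presto EC2 instance could look like this; the stack name is hypothetical, and the actual name would come from whatever template the setup used:

```python
def delete_presto_stack(stack_name, region="us-east-1"):
    """Delete the CloudFormation stack behind the Presto EC2 instance
    and wait for teardown to finish (stack_name is a placeholder)."""
    if not stack_name:
        raise ValueError("stack_name is required")
    import boto3  # deferred so the argument check stays testable offline

    cfn = boto3.client("cloudformation", region_name=region)
    cfn.delete_stack(StackName=stack_name)
    # Block until the stack and the EC2 instance it created are gone.
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)
```

Deleting the stack removes every resource it created, which avoids leaving the Presto server instance running and billing after the walkthrough.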
## Conclusion
With the rise of generative AI, the use of training, deploying and
running machine learning models exponentially increases, and so does the
use of data. With an integration of SageMaker Processing Jobs with
PrestoDB, customers can easily and seamlessly migrate their workloads to
-SageMaker pipelines without any burden of additional data preparation
-and storage. Customers can now build, train, evaluate, run batch
+SageMaker pipelines without any burden of additional data preparation,
+storage, and access. Customers can now build, train, evaluate, run batch
inferences and deploy their models as real time endpoints while taking
advantage of their existing data engineering pipelines with minimal to
no code changes.
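As one illustration of the integration described above, reading training data directly from PrestoDB inside a processing step might be sketched as follows. The host, catalog, schema, and table names are placeholders, and the `presto-python-client` package is assumed to be available in the job image:

```python
def build_query(catalog, schema, table, limit=1000):
    # Pure helper so the query shape can be checked without a live server.
    return f"SELECT * FROM {catalog}.{schema}.{table} LIMIT {limit}"


def fetch_training_frame(host, user, catalog, schema, table, port=8080):
    """Pull rows from PrestoDB straight into a DataFrame inside a
    SageMaker Processing job, skipping any intermediate S3 copy."""
    import prestodb  # presto-python-client; assumed present in the job image
    import pandas as pd

    conn = prestodb.dbapi.connect(
        host=host, port=port, user=user, catalog=catalog, schema=schema
    )
    cur = conn.cursor()
    cur.execute(build_query(catalog, schema, table))
    rows = cur.fetchall()
    columns = [d[0] for d in cur.description]
    return pd.DataFrame(rows, columns=columns)
```

Because the query runs where the data already lives, the processing script needs no extra extract-and-stage step before training.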
