rendering md version draft #1
Madhur prashant authored and Madhur prashant committed Apr 2, 2024
1 parent ab7cc14 commit 4e56047
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions blog_post.qmd
@@ -734,7 +734,7 @@ Once the prerequisites and set up is complete and the config.yml file is set up
 ## Results
-Here is a compilation of some queries and responses generated by our implementation from the real time endpoint deployment stage:
+Here is a compilation of some queries and responses generated by our implementation from the real-time endpoint deployment stage: [to add results here: querying data, fetching it, making predictions, etc.]
|Query | Answer |
|------------------------|-----------|
@@ -744,14 +744,14 @@ Here is a compilation of some queries and responses generated by our implementation
 ## Cleanup
---> need to perform and add a clean up section
+--> need to perform and add a cleanup section [we can add the CFT cleanup section here for the EC2 instance with the Presto server running on it]
 ## Conclusion
-With the rise of generative AI, the use of training, deploying and running machine learning models exponentially increases, and so does the use of data. With an integration of SageMaker Processing Jobs with PrestoDB, customers can easily and seamlessly migrate their workloads to SageMaker pipelines without any burden of additional data preparation and storage. Customers can now build, train, evaluate, run batch inferences and deploy their models as real time endpoints while taking advantage of their existing data engineering pipelines with minimal-no code changes.
+With the rise of generative AI, the training, deployment, and use of machine learning models grows exponentially, and so does the use of data. By integrating SageMaker Processing jobs with PrestoDB, customers can easily and seamlessly migrate their workloads to SageMaker Pipelines without the added burden of data preparation, storage, and access. Customers can now build, train, evaluate, run batch inference with, and deploy their models as real-time endpoints while taking advantage of their existing data engineering pipelines with minimal to no code changes.
-We encourage you to learn more by exploring SageMaker Pipeline, Open source data querying engines like PrestoDB and building a solution using the sample implementation provided in this post.
+We encourage you to learn more by exploring SageMaker Pipelines and open-source data query engines like PrestoDB, and by building a solution using the sample implementation provided in this post.
Portions of this code are released under the Apache 2.0 License as referenced here: https://aws.amazon.com/apache-2-0/
