diff --git a/blog_post.qmd b/blog_post.qmd
index a1852aa..79e02bc 100644
--- a/blog_post.qmd
+++ b/blog_post.qmd
@@ -734,7 +734,7 @@ Once the prerequisites and set up is complete and the config.yml file is set up
 ## Results
 
-Here is a compilation of some queries and responses generated by our implementation from the real time endpoint deployment stage:
+Here is a compilation of some queries and responses generated by our implementation from the real-time endpoint deployment stage: [to add results here: querying data, fetching it, making predictions, etc.]
 
 |Query | Answer |
 |------------------------|-----------|
@@ -744,14 +744,14 @@ Here is a compilation of some queries and responses generated by our implementat
 
 ## Cleanup
 
---> need to perform and add a clean up section
+--> need to perform and add a cleanup section [we can add the CFT cleanup section here for the EC2 instance with the Presto server running on it]
 
 ## Conclusion
 
-With the rise of generative AI, the use of training, deploying and running machine learning models exponentially increases, and so does the use of data. With an integration of SageMaker Processing Jobs with PrestoDB, customers can easily and seamlessly migrate their workloads to SageMaker pipelines without any burden of additional data preparation and storage. Customers can now build, train, evaluate, run batch inferences and deploy their models as real time endpoints while taking advantage of their existing data engineering pipelines with minimal-no code changes.
+With the rise of generative AI, the training, deployment, and use of machine learning models grows rapidly, and so does the use of data. By integrating SageMaker Processing Jobs with PrestoDB, customers can seamlessly migrate their workloads to SageMaker pipelines without the additional burden of data preparation, storage, and access. Customers can now build, train, evaluate, run batch inference on, and deploy their models as real-time endpoints while taking advantage of their existing data engineering pipelines with minimal to no code changes.
 
-We encourage you to learn more by exploring SageMaker Pipeline, Open source data querying engines like PrestoDB and building a solution using the sample implementation provided in this post.
+We encourage you to learn more by exploring SageMaker Pipelines and open-source data query engines like PrestoDB, and by building a solution using the sample implementation provided in this post. Portions of this code are released under the Apache 2.0 License as referenced here: https://aws.amazon.com/apache-2-0/