diff --git a/blog_post.md b/blog_post.md
index 7241097..2c48c79 100644
--- a/blog_post.md
+++ b/blog_post.md
@@ -1,4 +1,4 @@
-# How Twilio used Amazon SageMaker MLOps Pipelines with PrestoDB to enable frequent model re-training, optimized batch processing, and detect burner phone numbers
+# How Twilio used Amazon SageMaker MLOps Pipelines with PrestoDB to enable frequent model re-training, optimize batch processing, and detect burner phone numbers
*Amit Arora*, *Madhur Prashant*, *Antara Raisa*, *Johnny Chivers*
@@ -1078,7 +1080,31 @@ to walk through the solution:
## Results
Here is a compilation of some queries and responses generated by our
-implementation from the real time endpoint deployment stage:
+implementation from the real-time endpoint deployment stage: \[TODO: add
+sample queries, the fetched data, and the resulting predictions here\]
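+
+Until the real results are captured, here is a minimal sketch of how the
+real-time endpoint could be queried with boto3; the endpoint name and the
+CSV payload are assumptions, not values from the actual deployment:
+
+```python
+import boto3
+
+# Hypothetical endpoint name; substitute the name from the deployment step.
+ENDPOINT_NAME = "mlops-pipeline-prestodb-endpoint"
+
+runtime = boto3.client("sagemaker-runtime")
+
+# One CSV record of features; in this solution the features come from PrestoDB.
+payload = "0.53,1.28,3,0,7"
+
+response = runtime.invoke_endpoint(
+    EndpointName=ENDPOINT_NAME,
+    ContentType="text/csv",
+    Body=payload,
+)
+print(response["Body"].read().decode("utf-8"))
+```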
mlops-pipeline-prestodb results
@@ -1104,7 +1107,29 @@ mlops-pipeline-prestodb results
## Cleanup
-–\> need to perform and add a clean up section
+\[TODO: add a cleanup section here covering the CloudFormation stack
+teardown for the EC2 instance that runs the Presto server\]
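+
+Until that section is written, here is a minimal teardown sketch using
+boto3; the stack name and endpoint name below are hypothetical
+placeholders for the values used in the actual deployment:
+
+```python
+import boto3
+
+# Hypothetical names; substitute the values from your deployment.
+STACK_NAME = "presto-server-stack"
+ENDPOINT_NAME = "mlops-pipeline-prestodb-endpoint"
+
+# Delete the real-time endpoint so it stops accruing charges.
+boto3.client("sagemaker").delete_endpoint(EndpointName=ENDPOINT_NAME)
+
+# Deleting the CloudFormation stack terminates the EC2 instance running
+# the Presto server along with the other resources the template created.
+cfn = boto3.client("cloudformation")
+cfn.delete_stack(StackName=STACK_NAME)
+cfn.get_waiter("stack_delete_complete").wait(StackName=STACK_NAME)
+```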
## Conclusion
@@ -1112,8 +1117,8 @@ With the rise of generative AI, the use of training, deploying and
-running machine learning models exponentially increases, and so does the
-use of data. With an integration of SageMaker Processing Jobs with
-PrestoDB, customers can easily and seamlessly migrate their workloads to
-SageMaker pipelines without any burden of additional data preparation
-and storage. Customers can now build, train, evaluate, run batch
-inferences and deploy their models as real time endpoints while taking
-advantage of their existing data engineering pipelines with minimal-no
-code changes.
+running machine learning models grows exponentially, and so does the use
+of data. With the integration of SageMaker Processing Jobs with PrestoDB,
+customers can easily and seamlessly migrate their workloads to SageMaker
+pipelines without the burden of additional data preparation, storage, and
+access. Customers can now build, train, and evaluate models, run batch
+inference, and deploy them as real-time endpoints while taking advantage
+of their existing data engineering pipelines with minimal to no code
+changes.