Merge branch 'development' of github.com:v3io/tutorials
Sharon-iguazio committed Jan 2, 2020
2 parents 9d1ced4 + 76bfedd commit abe1a92
Showing 9 changed files with 1,916 additions and 306 deletions.
16 changes: 7 additions & 9 deletions demos/README.ipynb
@@ -38,17 +38,15 @@
"<a id=\"image-classification-demo\"></a>\n",
"## Image Classification\n",
"\n",
"The [**image-classification**](image-classification/01-image-classification.ipynb) demo demonstrates image recognition: the application builds and trains an ML model that identifies (recognizes) and classifies images.\n",
"The [**image-classification**](image-classification/01-image-classification.ipynb) demo demonstrates an end-to-end solution for image recognition: the application uses TensorFlow, Keras, Horovod, and Nuclio to build and train an ML model that identifies (recognizes) and classifies images. \n",
"The application consists of four MLRun and Nuclio functions for performing the following operations:\n",
"\n",
"This example is using TensorFlow, Horovod, and Nuclio demonstrating end to end solution for image classification, \n",
"it consists of 4 MLRun and Nuclio functions:\n",
"1. Import an image archive from from an Amazon Simple Storage (S3) bucket to the platform's data store.\n",
"2. Tag the images based on their name structure.\n",
"3. Train the image-classification ML model by using [TensorFlow](https://www.tensorflow.org/) and [Keras](https://keras.io/); use [Horovod](https://eng.uber.com/horovod/) to perform distributed training over either GPUs or CPUs.\n",
"4. Automatically deploy a Nuclio model-serving function from [Jupyter Notebook](nuclio-serving-tf-images.ipynb) or from a [Dockerfile](./inference-docker).\n",
"\n",
"1. import an image archive from S3 to the cluster file system\n",
"2. Tag the images based on their name structure \n",
"3. Distrubuted training using TF, Keras and Horovod\n",
"4. Automated deployment of Nuclio model serving function (form [Notebook](nuclio-serving-tf-images.ipynb) and from [Dockerfile](./inference-docker))\n",
"\n",
"The Example also demonstrate an [automated pipeline](mlrun_mpijob_pipe.ipynb) using MLRun and KubeFlow pipelines "
"This demo also provides an example of an [automated pipeline](image-classification/02-create_pipeline.ipynb) using [MLRun](https://github.com/mlrun/mlrun) and [Kubeflow pipelines](https://github.com/kubeflow/pipelines)."
]
},
{
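As a rough illustration of the first two workflow steps listed above (importing an image archive from S3 and tagging images by their file-name structure), the sketch below downloads and extracts an archive and derives a label from each file name. The bucket URL, target path, and `<label>.<index>.<ext>` naming convention are assumptions for illustration, not taken from the demo code.

```python
import os
import tarfile
import urllib.request

# Hypothetical S3 URL and target path -- replace with the demo's actual bucket and data-store path.
ARCHIVE_URL = "https://s3.amazonaws.com/example-bucket/images.tar.gz"
TARGET_DIR = "/User/demos/images"


def import_archive(url, target_dir):
    """Download an image archive and extract it into the target directory."""
    os.makedirs(target_dir, exist_ok=True)
    local_path = os.path.join(target_dir, os.path.basename(url))
    urllib.request.urlretrieve(url, local_path)
    with tarfile.open(local_path) as tar:
        tar.extractall(target_dir)
    return target_dir


def tag_images(images_dir):
    """Derive a label from each image file name, assuming '<label>.<index>.<ext>' naming."""
    labels = {}
    for name in os.listdir(images_dir):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            labels[name] = name.split(".")[0]
    return labels


if __name__ == "__main__":
    data_dir = import_archive(ARCHIVE_URL, TARGET_DIR)
    print(tag_images(data_dir))
```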
16 changes: 7 additions & 9 deletions demos/README.md
@@ -17,17 +17,15 @@ The **demos** tutorials directory contains full end-to-end use-case applications
<a id="image-classification-demo"></a>
## Image Classification

The [**image-classification**](image-classification/01-image-classification.ipynb) demo demonstrates image recognition: the application builds and trains an ML model that identifies (recognizes) and classifies images.
The [**image-classification**](image-classification/01-image-classification.ipynb) demo demonstrates an end-to-end solution for image recognition: the application uses TensorFlow, Keras, Horovod, and Nuclio to build and train an ML model that identifies (recognizes) and classifies images.
The application consists of four MLRun and Nuclio functions for performing the following operations:

This example is using TensorFlow, Horovod, and Nuclio demonstrating end to end solution for image classification,
it consists of 4 MLRun and Nuclio functions:
1. Import an image archive from an Amazon Simple Storage Service (S3) bucket to the platform's data store.
2. Tag the images based on their name structure.
3. Train the image-classification ML model by using [TensorFlow](https://www.tensorflow.org/) and [Keras](https://keras.io/); use [Horovod](https://eng.uber.com/horovod/) to perform distributed training over either GPUs or CPUs.
4. Automatically deploy a Nuclio model-serving function from [Jupyter Notebook](nuclio-serving-tf-images.ipynb) or from a [Dockerfile](./inference-docker).

1. import an image archive from S3 to the cluster file system
2. Tag the images based on their name structure
3. Distrubuted training using TF, Keras and Horovod
4. Automated deployment of Nuclio model serving function (form [Notebook](nuclio-serving-tf-images.ipynb) and from [Dockerfile](./inference-docker))

The Example also demonstrate an [automated pipeline](mlrun_mpijob_pipe.ipynb) using MLRun and KubeFlow pipelines
This demo also provides an example of an [automated pipeline](image-classification/02-create_pipeline.ipynb) using [MLRun](https://github.com/mlrun/mlrun) and [Kubeflow pipelines](https://github.com/kubeflow/pipelines).
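Step 3 of the workflow above performs distributed training with Horovod. The following is a minimal sketch of what a Keras training script adapted for Horovod typically looks like, assuming TensorFlow 2.x; the model choice and dummy data are illustrative assumptions, and the demo's actual training code lives in its own **horovod-training.py**.

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # initialize Horovod across all workers (one process per GPU or CPU slot)

# Pin each worker to a single GPU; this is a no-op when running on CPUs only.
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Illustrative model and dummy data -- the demo trains on its own tagged image set.
model = tf.keras.applications.MobileNetV2(weights=None, classes=2)
x_train = np.random.rand(32, 224, 224, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(32,))

# Scale the learning rate by the number of workers and wrap the optimizer for all-reduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.001 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),  # sync initial weights from rank 0
    hvd.callbacks.MetricAverageCallback(),              # average metrics across workers
]
if hvd.rank() == 0:
    callbacks.append(tf.keras.callbacks.ModelCheckpoint("/tmp/model.h5"))  # save only once

model.fit(x_train, y_train, batch_size=8, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```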

<a id="netops-demo"></a>
## Predictive Infrastructure Monitoring
10 changes: 6 additions & 4 deletions demos/gpu/README.ipynb
@@ -25,14 +25,16 @@
"- A **horovod** directory with applications that use Uber's [Horovod](https://eng.uber.com/horovod/) distributed deep-learning framework, which can be used to convert a single-GPU TensorFlow, Keras, or PyTorch model-training program to a distributed program that trains the model simultaneously over multiple GPUs.\n",
" The objective is to speed up your model training with minimal changes to your existing single-GPU code and without complicating the execution.\n",
" Horovod code can also run over CPUs with only minor modifications.\n",
" The Horovod tutorials include the following:\n",
" - Benchmark tests (**benchmark-tf.ipynb**, which executes **tf_cnn_benchmarks.py**).\n",
" - Note that under the demo folder you will find an image classificaiton demo that is also running with Horovod and can be set to run with GPU <br>\n",
" For more information and examples, see the [Horovod GitHub repository](https://github.com/horovod/horovod).\n",
" \n",
" The Horovod GPU tutorials include benchmark tests (**benchmark-tf.ipynb**, which executes **tf_cnn_benchmarks.py**).<br>\n",
" In addition, the image-classification demo ([**demos/image-classification/**](../image-classification/01-image-classification.ipynb)) demonstrates how to use Horovod for image recognition, and can be configured to run over GPUs.\n",
"\n",
"- A **rapids** directory with applications that use NVIDIA's [RAPIDS](https://rapids.ai/) open-source libraries suite for executing end-to-end data science and analytics pipelines entirely on GPUs.\n",
"\n",
" The RAPIDS tutorials include the following:\n",
"\n",
" - Demo applications that use the [cuDF](https://rapidsai.github.io/projects/cudf/en/latest/index.html) RAPIDS GPU DataFrame library to perform batching and aggregation of data that's read from a Kafaka stream, and then write the results to a Parquet file.<br>\n",
" - Demo applications that use the [cuDF](https://rapidsai.github.io/projects/cudf/en/latest/index.html) RAPIDS GPU DataFrame library to perform batching and aggregation of data that's read from a Kafka stream, and then write the results to a Parquet file.<br>\n",
" The **nuclio-cudf-agg.ipynb** demo implements this by using a Nuclio serverless function while the **python-agg.ipynb** demo implements this by using a standalone Python function.\n",
" - Benchmark tests that compare the performance of RAPIDS cuDF to pandas DataFrames (**benchmark-cudf-vs-pd.ipynb**)."
]
9 changes: 3 additions & 6 deletions demos/gpu/README.md
@@ -16,17 +16,14 @@ The **demos/gpu** directory includes the following:
Horovod code can also run over CPUs with only minor modifications.
For more information and examples, see the [Horovod GitHub repository](https://github.com/horovod/horovod).

The Horovod tutorials include the following:

- An image-recognition demo application for execution over GPUs (**image-classification**).
- A slightly modified version of the GPU image-classification demo application for execution over CPUs (**cpu/image-classification**).
- Benchmark tests (**benchmark-tf.ipynb**, which executes **tf_cnn_benchmarks.py**).
The Horovod GPU tutorials include benchmark tests (**benchmark-tf.ipynb**, which executes **tf_cnn_benchmarks.py**).<br>
In addition, the image-classification demo ([**demos/image-classification/**](../image-classification/01-image-classification.ipynb)) demonstrates how to use Horovod for image recognition, and can be configured to run over GPUs.

- A **rapids** directory with applications that use NVIDIA's [RAPIDS](https://rapids.ai/) open-source libraries suite for executing end-to-end data science and analytics pipelines entirely on GPUs.

The RAPIDS tutorials include the following:

- Demo applications that use the [cuDF](https://rapidsai.github.io/projects/cudf/en/latest/index.html) RAPIDS GPU DataFrame library to perform batching and aggregation of data that's read from a Kafaka stream, and then write the results to a Parquet file.<br>
- Demo applications that use the [cuDF](https://rapidsai.github.io/projects/cudf/en/latest/index.html) RAPIDS GPU DataFrame library to perform batching and aggregation of data that's read from a Kafka stream, and then write the results to a Parquet file.<br>
The **nuclio-cudf-agg.ipynb** demo implements this by using a Nuclio serverless function while the **python-agg.ipynb** demo implements this by using a standalone Python function.
- Benchmark tests that compare the performance of RAPIDS cuDF to pandas DataFrames (**benchmark-cudf-vs-pd.ipynb**).
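As a rough sketch of the cuDF batching-and-aggregation pattern described above (the message format, field names, and consumer are assumptions, not the demo's code), each batch read from the stream can be loaded into a GPU DataFrame, aggregated, and written to Parquet:

```python
import json

import cudf          # RAPIDS GPU DataFrame library
import pandas as pd


def aggregate_batch(messages, out_path="/tmp/agg.parquet"):
    """Aggregate one batch of JSON messages on the GPU and write the result to Parquet."""
    records = [json.loads(m) for m in messages]               # in the demo, messages come from a Kafka consumer
    gdf = cudf.DataFrame.from_pandas(pd.DataFrame(records))   # move the batch into GPU memory
    agg = gdf.groupby("device_id").mean()                     # illustrative aggregation and field name
    agg.to_parquet(out_path)
    return out_path


# Example usage with a fabricated batch standing in for messages read from the stream:
batch = [json.dumps({"device_id": i % 3, "value": float(i)}) for i in range(100)]
print(aggregate_batch(batch))
```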

71 changes: 71 additions & 0 deletions demos/image-classification/README.ipynb
@@ -0,0 +1,71 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Image Classification Using Distributed Training\n",
"\n",
"- [Overview](#image-classif-demo-overview)\n",
"- [Notebooks and Code](#image-classif-demo-nbs-n-code)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"image-classif-demo-overview\"></a>\n",
"## Overview\n",
"\n",
"This demo demonstrates an end-to-end solution for image recognition: the application uses TensorFlow, Keras, Horovod, and Nuclio to build and train an ML model that identifies (recognizes) and classifies images. \n",
"The application consists of four MLRun and Nuclio functions for performing the following operations:\n",
"\n",
"1. Import an image archive from from an Amazon Simple Storage (S3) bucket to the platform's data store.\n",
"2. Tag the images based on their name structure.\n",
"3. Train the image-classification ML model by using [TensorFlow](https://www.tensorflow.org/) and [Keras](https://keras.io/); use [Horovod](https://eng.uber.com/horovod/) to perform distributed training over either GPUs or CPUs.\n",
"4. Automatically deploy a Nuclio model-serving function from [Jupyter Notebook](nuclio-serving-tf-images.ipynb) or from a [Dockerfile](./inference-docker).\n",
"\n",
"<br><p align=\"center\"><img src=\"workflow.png\" width=\"600\"/></p><br>\n",
"\n",
"This demo also provides an example of an [automated pipeline](image-classification/02-create_pipeline.ipynb) using [MLRun](https://github.com/mlrun/mlrun) and [Kubeflow pipelines](https://github.com/kubeflow/pipelines)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"image-classif-demo-nbs-n-code\"></a>\n",
"## Notebooks and Code\n",
"\n",
"- [**01-image-classification.ipynb**](01-image-classification.ipynb) &mdash; all-in-one: import, tag, launch train, deploy, and serve\n",
"- [**horovod-training.py**](horovod-training.py) &mdash; train function code\n",
"- [**nuclio-serving-tf-images.ipynb**](nuclio-serving-tf-images.ipynb) &mdash; serve function development and test\n",
"- [**02-create_pipeline.ipynb**](02-create_pipeline.ipynb) &mdash; auto-generate a Kubeflow pipeline workflow\n",
"- **inference-docker/** &mdash; build and serve functions using a Dockerfile:\n",
" - [**main.py**](./inference-docker/main.py) &mdash; function code\n",
" - [**Dockerfile**](./inference-docker/Dockerfile) &mdash; a Dockerfile"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
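The **nuclio-serving-tf-images.ipynb** notebook listed above develops and deploys the Nuclio model-serving function. A minimal sketch of such a handler, assuming a saved Keras model and a JSON request body with an `instances` array (both assumptions, not the demo's actual code), might look like this:

```python
import json

import numpy as np
from tensorflow import keras

MODEL_PATH = "/User/models/image-classifier.h5"  # assumed location of the trained model
_model = None


def _get_model():
    """Load the Keras model once and cache it across invocations."""
    global _model
    if _model is None:
        _model = keras.models.load_model(MODEL_PATH)
    return _model


def handler(context, event):
    """Nuclio handler: expects a JSON body with an 'instances' array of image tensors."""
    body = json.loads(event.body)
    instances = np.asarray(body["instances"], dtype="float32")
    predictions = _get_model().predict(instances)
    context.logger.info("served %d instances" % len(instances))
    return json.dumps({"predictions": predictions.tolist()})
```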
38 changes: 22 additions & 16 deletions demos/image-classification/README.md
@@ -1,24 +1,30 @@
# Image Classification Using Distributed Training

This example is using TensorFlow, Horovod, and Nuclio demonstrating end to end solution for image classification,
it consists of 4 MLRun and Nuclio functions:
- [Overview](#image-classif-demo-overview)
- [Notebooks and Code](#image-classif-demo-nbs-n-code)

1. import an image archive from S3 to the cluster file system
2. Tag the images based on their name structure
3. Distrubuted training using TF, Keras and Horovod
4. Automated deployment of Nuclio model serving function (form [Notebook](nuclio-serving-tf-images.ipynb) and from [Dockerfile](./inference-docker))
<a id="image-classif-demo-overview"></a>
## Overview

<br><p align="center"><img src="workflow.png" width="600"/></p><br>
This demo demonstrates an end-to-end solution for image recognition: the application uses TensorFlow, Keras, Horovod, and Nuclio to build and train an ML model that identifies (recognizes) and classifies images.
The application consists of four MLRun and Nuclio functions for performing the following operations:

1. Import an image archive from an Amazon Simple Storage Service (S3) bucket to the platform's data store.
2. Tag the images based on their name structure.
3. Train the image-classification ML model by using [TensorFlow](https://www.tensorflow.org/) and [Keras](https://keras.io/); use [Horovod](https://eng.uber.com/horovod/) to perform distributed training over either GPUs or CPUs.
4. Automatically deploy a Nuclio model-serving function from [Jupyter Notebook](nuclio-serving-tf-images.ipynb) or from a [Dockerfile](./inference-docker).

The Example also demonstrate an [automated pipeline](mlrun_mpijob_pipe.ipynb) using MLRun and KubeFlow pipelines
<br><p align="center"><img src="workflow.png" width="600"/></p><br>

## Notebooks & Code
This demo also provides an example of an [automated pipeline](02-create_pipeline.ipynb) using [MLRun](https://github.com/mlrun/mlrun) and [Kubeflow pipelines](https://github.com/kubeflow/pipelines).

* [All-in-one: Import, tag, launch training, deploy serving](01-image-classification.ipynb)
* [Training function code](horovod-training.py)
* [Serving function development and testing](nuclio-serving-tf-images.ipynb)
* [Auto generation of KubeFlow pipelines workflow](02-create_pipeline.ipynb)
* [Building serving function using Dockerfile](./inference-docker)
* [function code](./inference-docker/main.py)
* [Dockerfile](./inference-docker/Dockerfile)
<a id="image-classif-demo-nbs-n-code"></a>
## Notebooks and Code

- [**01-image-classification.ipynb**](01-image-classification.ipynb) &mdash; all-in-one: import, tag, launch train, deploy, and serve
- [**horovod-training.py**](horovod-training.py) &mdash; train function code
- [**nuclio-serving-tf-images.ipynb**](nuclio-serving-tf-images.ipynb) &mdash; serve function development and test
- [**02-create_pipeline.ipynb**](02-create_pipeline.ipynb) &mdash; auto-generate a Kubeflow pipeline workflow
- **inference-docker/** &mdash; build and serve functions using a Dockerfile:
- [**main.py**](./inference-docker/main.py) &mdash; function code
- [**Dockerfile**](./inference-docker/Dockerfile) &mdash; a Dockerfile
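The **02-create_pipeline.ipynb** notebook listed above auto-generates a Kubeflow pipeline that chains these steps. A minimal, generic sketch of such a pipeline definition (the container images and arguments are placeholders; the demo builds its steps from MLRun functions) could be:

```python
import kfp
from kfp import dsl


@dsl.pipeline(
    name="image-classification-demo",
    description="Import, label, train, and deploy an image-classification model",
)
def image_classification_pipeline(
    archive_url="https://s3.amazonaws.com/example-bucket/images.tar.gz",
):
    # Placeholder container images; the demo generates these steps from MLRun functions instead.
    ingest = dsl.ContainerOp(
        name="import-archive",
        image="example/ingest:latest",
        arguments=["--archive-url", archive_url],
    )
    label = dsl.ContainerOp(name="label-images", image="example/label:latest").after(ingest)
    train = dsl.ContainerOp(name="horovod-training", image="example/train:latest").after(label)
    dsl.ContainerOp(name="deploy-serving", image="example/serving:latest").after(train)


if __name__ == "__main__":
    # Compile into a package that can be uploaded to the Kubeflow Pipelines UI.
    kfp.compiler.Compiler().compile(image_classification_pipeline, "image-classification-pipeline.yaml")
```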
