Merge remote-tracking branch 'upstream/master' into scatter_eu_reference
tomdol committed Jun 26, 2023
2 parents 39071bd + c3b7e81 commit 547d0d4
Showing 231 changed files with 8,295 additions and 7,067 deletions.
@@ -7,6 +7,8 @@ Introduction to ONNX

`ONNX <https://github.com/onnx/onnx>`__ is a representation format for deep learning models that allows AI developers to easily transfer models between different frameworks. It is hugely popular among deep learning tools such as PyTorch, Caffe2, Apache MXNet, Microsoft Cognitive Toolkit, and many others.

.. note:: ONNX models are supported via FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as cutting a model to new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
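
For illustration, a minimal sketch of the direct-reading path described in the note above; ``model.onnx`` is a placeholder file name, not part of this guide:

.. code-block:: py

    from openvino.runtime import Core

    core = Core()
    # Read the ONNX model directly; no prior conversion to IR is needed.
    model = core.read_model("model.onnx")  # placeholder path
    compiled_model = core.compile_model(model, device_name="CPU")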

Converting an ONNX Model
########################

@@ -4,6 +4,8 @@

This page provides general instructions on how to convert a model from the PaddlePaddle format to the OpenVINO IR format using Model Optimizer. The instructions differ depending on the PaddlePaddle model format.

.. note:: PaddlePaddle models are supported via FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as cutting a model to new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
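
As an illustrative sketch only (the file name is a placeholder), a PaddlePaddle inference model can be read the same way:

.. code-block:: py

    from openvino.runtime import Core

    core = Core()
    # Read the PaddlePaddle inference model (.pdmodel) directly.
    model = core.read_model("inference.pdmodel")  # placeholder path
    compiled_model = core.compile_model(model, device_name="CPU")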

Converting PaddlePaddle Model Inference Format
##############################################

@@ -4,6 +4,8 @@

This page provides general instructions on how to run model conversion from a TensorFlow format to the OpenVINO IR format. The instructions are different depending on whether your model was created with TensorFlow v1.X or TensorFlow v2.X.

.. note:: TensorFlow models are supported via :doc:`FrontEnd API <openvino_docs_MO_DG_TensorFlow_Frontend>`. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as cutting a model to new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
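
A minimal sketch, assuming a frozen graph file (the path is a placeholder; a SavedModel directory can be passed the same way):

.. code-block:: py

    from openvino.runtime import Core

    core = Core()
    # Read the TensorFlow model directly via the TensorFlow Frontend.
    model = core.read_model("frozen_graph.pb")  # placeholder path
    compiled_model = core.compile_model(model, device_name="CPU")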

To use model conversion API, install OpenVINO Development Tools by following the :doc:`installation instructions <openvino_docs_install_guides_install_dev_tools>`.

Converting TensorFlow 1 Models
@@ -8,7 +8,7 @@ To convert a TensorFlow Lite model, use the ``mo`` script and specify the path t

mo --input_model <INPUT_MODEL>.tflite

- .. note:: TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly by OpenVINO runtime API.
+ .. note:: TensorFlow Lite models are supported via FrontEnd API. You may skip conversion to IR and read models directly with the OpenVINO Runtime API. Refer to the :doc:`inference example <openvino_docs_OV_UG_Integrate_OV_with_your_application>` for more details. Using ``convert_model`` is still necessary in more complex cases, such as cutting a model to new custom inputs/outputs (model pruning), adding pre-processing, or using Python conversion extensions.
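
A minimal sketch of the direct path (``model.tflite`` is a placeholder); ``compile_model`` also accepts a model path, so the explicit ``read_model`` call can be skipped:

.. code-block:: py

    from openvino.runtime import Core

    core = Core()
    # Compile the TensorFlow Lite model directly from its file path.
    compiled_model = core.compile_model("model.tflite", device_name="CPU")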

Supported TensorFlow Lite Layers
###################################
2 changes: 1 addition & 1 deletion docs/nbdoc/consts.py
@@ -8,7 +8,7 @@

repo_name = "openvino_notebooks"

- artifacts_link = "http://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20230529220816/dist/rst_files/"
+ artifacts_link = "http://repository.toolbox.iotg.sclab.intel.com/projects/ov-notebook/0.1.0-latest/20230621220808/dist/rst_files/"

blacklisted_extensions = ['.xml', '.bin']

49 changes: 45 additions & 4 deletions docs/notebooks/001-hello-world-with-output.rst
@@ -5,7 +5,7 @@ This basic introduction to OpenVINO™ shows how to do inference with an
image classification model.

A pre-trained `MobileNetV3
- model <https://docs.openvino.ai/latest/omz_models_model_mobilenet_v3_small_1_0_224_tf.html>`__
+ model <https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html>`__
from `Open Model
Zoo <https://github.com/openvinotoolkit/open_model_zoo/>`__ is used in
this tutorial. For more information about how OpenVINO IR models are
@@ -18,18 +18,59 @@ Imports

.. code:: ipython3

    from pathlib import Path
    import sys

    import cv2
    import matplotlib.pyplot as plt
    import numpy as np
    from openvino.runtime import Core

    sys.path.append("../utils")
    from notebook_utils import download_file
Download the Model and data samples
-----------------------------------

.. code:: ipython3

    base_artifacts_dir = Path('./artifacts').expanduser()

    model_name = "v3-small_224_1.0_float"
    model_xml_name = f'{model_name}.xml'
    model_bin_name = f'{model_name}.bin'

    model_xml_path = base_artifacts_dir / model_xml_name

    base_url = 'https://storage.openvinotoolkit.org/repositories/openvino_notebooks/models/mobelinet-v3-tf/FP32/'

    if not model_xml_path.exists():
        download_file(base_url + model_xml_name, model_xml_name, base_artifacts_dir)
        download_file(base_url + model_bin_name, model_bin_name, base_artifacts_dir)
    else:
        print(f'{model_name} already downloaded to {base_artifacts_dir}')
.. parsed-literal::

    artifacts/v3-small_224_1.0_float.xml:   0%|          | 0.00/294k [00:00<?, ?B/s]


.. parsed-literal::

    artifacts/v3-small_224_1.0_float.bin:   0%|          | 0.00/4.84M [00:00<?, ?B/s]
Load the Model
--------------

.. code:: ipython3

    ie = Core()
-   model = ie.read_model(model="model/v3-small_224_1.0_float.xml")
+   model = ie.read_model(model=model_xml_path)
    compiled_model = ie.compile_model(model=model, device_name="CPU")

    output_layer = compiled_model.output(0)
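
The inference cell itself falls outside this hunk. As a sketch of how ``compiled_model`` and ``output_layer`` are typically used later in the notebook (``input_image`` being the preprocessed array from the next cell; not verbatim notebook content):

.. code:: ipython3

    # Run inference on one image and take the class with the highest score.
    result = compiled_model([input_image])[output_layer]
    result_index = np.argmax(result)
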
@@ -47,11 +88,11 @@ Load an Image
    # Reshape to model input shape.
    input_image = np.expand_dims(input_image, 0)

-   plt.imshow(image)
+   plt.imshow(image);

- .. image:: 001-hello-world-with-output_files/001-hello-world-with-output_6_0.png
+ .. image:: 001-hello-world-with-output_files/001-hello-world-with-output_8_0.png


Do Inference
6 changes: 3 additions & 3 deletions docs/notebooks/001-hello-world-with-output_files/index.html
@@ -1,7 +1,7 @@
<html>
- <head><title>Index of /projects/ov-notebook/0.1.0-latest/20230529220816/dist/rst_files/001-hello-world-with-output_files/</title></head>
+ <head><title>Index of /projects/ov-notebook/0.1.0-latest/20230621220808/dist/rst_files/001-hello-world-with-output_files/</title></head>
  <body bgcolor="white">
- <h1>Index of /projects/ov-notebook/0.1.0-latest/20230529220816/dist/rst_files/001-hello-world-with-output_files/</h1><hr><pre><a href="../">../</a>
- <a href="001-hello-world-with-output_6_0.png">001-hello-world-with-output_6_0.png</a> 30-May-2023 00:09 387941
+ <h1>Index of /projects/ov-notebook/0.1.0-latest/20230621220808/dist/rst_files/001-hello-world-with-output_files/</h1><hr><pre><a href="../">../</a>
+ <a href="001-hello-world-with-output_8_0.png">001-hello-world-with-output_8_0.png</a> 22-Jun-2023 00:06 387941
</pre><hr></body>
</html>
