Commit b0cf71a

Update at 2024-01-28 10:57:09.860757

mlc-bot committed Jan 28, 2024 · 1 parent e21d05f
Showing 67 changed files with 3,733 additions and 14,682 deletions.
51 changes: 24 additions & 27 deletions _sources/deep_dive/tensor_ir/tutorials/creation.rst.txt
@@ -59,7 +59,7 @@ format of the ir_module and in TVMScript:

.. GENERATED FROM PYTHON SOURCE LINES 54-93
-.. code-block:: default
+.. code-block:: Python
@@ -121,7 +121,7 @@ streamline the code:

.. GENERATED FROM PYTHON SOURCE LINES 103-126
-.. code-block:: default
+.. code-block:: Python
@@ -159,7 +159,7 @@ We can use the following code to verify that the two modules are equivalent:

.. GENERATED FROM PYTHON SOURCE LINES 128-131
-.. code-block:: default
+.. code-block:: Python
print(tvm.ir.structural_equal(MyModule, ConciseModule))
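A quick sketch of how this check is typically used (assuming the tutorial's ``MyModule`` and ``ConciseModule`` are in scope; this sketch is not part of the diff):

.. code-block:: Python

   import tvm

   # structural_equal compares the IR structure rather than the printed text,
   # so modules written with different syntactic sugar can still match.
   assert tvm.ir.structural_equal(MyModule, ConciseModule)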
@@ -187,7 +187,7 @@ be used to ascertain the shape and data type of a TensorIR.

.. GENERATED FROM PYTHON SOURCE LINES 137-165
-.. code-block:: default
+.. code-block:: Python
# Python variables
@@ -230,7 +230,7 @@ Check the equivalence:

.. GENERATED FROM PYTHON SOURCE LINES 167-171
-.. code-block:: default
+.. code-block:: Python
print(tvm.ir.structural_equal(ConciseModule, ConciseModuleFromPython))
@@ -259,7 +259,7 @@ be used to ascertain the shape and data type of a TensorIR.

.. GENERATED FROM PYTHON SOURCE LINES 177-203
-.. code-block:: default
+.. code-block:: Python
@@ -300,7 +300,7 @@ Now let's check the runtime dynamic shape inference:

.. GENERATED FROM PYTHON SOURCE LINES 205-221
-.. code-block:: default
+.. code-block:: Python
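The collapsed code here presumably compiles the dynamic-shape module and runs it on inputs whose sizes are only known at call time; a sketch under assumed names (``DynModule`` and the 4×4 test size are hypothetical, not shown in the diff):

.. code-block:: Python

   import numpy as np
   import tvm

   rt_mod = tvm.build(DynModule, target="llvm")  # compile for CPU
   a = tvm.nd.array(np.random.rand(4, 4).astype("float32"))
   b = tvm.nd.array(np.random.rand(4, 4).astype("float32"))
   c = tvm.nd.empty((4, 4), dtype="float32")
   # The symbolic dims are bound from the actual buffer shapes at call time;
   # this is the runtime dynamic shape inference being checked.
   rt_mod["mm_relu"](a, b, c)
   print(c.numpy())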
@@ -326,17 +326,17 @@ Now let's check the runtime dynamic shape inference:

.. code-block:: none
-    [[0.74369144 1.2857096 0.9312536 1.4938402 ]
-     [0.8457197 1.1344357 1.0167718 1.4015458 ]
-     [0.6215172 0.8697282 0.73780996 1.08859 ]
-     [0.8444467 1.4817338 1.0255342 1.6752933 ]]
-    [[31.81367 29.322643 33.46643 ... 32.066936 31.843338 31.997297]
-     [34.21587 29.761692 33.985676 ... 33.975616 30.549124 32.943256]
-     [36.668354 35.634075 37.476894 ... 36.07737 33.93837 33.803913]
+    [[0.5621404 0.543729 0.61239976 0.3138216 ]
+     [1.2092495 1.08359 1.2053614 0.72090256]
+     [1.2344809 1.139432 1.3031573 0.795681 ]
+     [1.895048 1.5549614 1.5495265 1.25517 ]]
+    [[30.134668 29.863676 32.996834 ... 30.820768 33.188026 28.581135]
+     [31.773392 30.470835 31.417461 ... 30.190342 32.607628 30.425243]
+     [34.150497 33.43621 34.17341 ... 35.03381 36.493015 31.566217]
      ...
-     [36.620438 31.921883 34.284283 ... 34.45535 31.221395 31.307842]
-     [29.393274 27.178171 29.224554 ... 29.862112 27.953592 29.040247]
-     [34.58329 31.312542 30.833471 ... 32.808777 32.16594 32.481743]]
+     [34.130653 33.914062 34.77706 ... 33.488625 33.282394 31.920208]
+     [31.484621 30.208483 30.878523 ... 30.587534 32.167362 30.47426 ]
+     [30.489397 29.572134 32.110092 ... 29.733181 33.30254 28.844995]]
@@ -366,7 +366,7 @@ TE creation method.

.. GENERATED FROM PYTHON SOURCE LINES 242-251
-.. code-block:: default
+.. code-block:: Python
from tvm import te
@@ -402,7 +402,7 @@ and one output parameter **C**.

.. GENERATED FROM PYTHON SOURCE LINES 265-270
-.. code-block:: default
+.. code-block:: Python
te_func = te.create_prim_func([A, B, C]).with_attr({"global_symbol": "mm_relu"})
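For context, a sketch of how the TE tensors referenced here are typically declared (the hunk shows only the final line; the 128×128 float32 shapes are assumptions based on the ``mm_relu`` example):

.. code-block:: Python

   from tvm import te

   A = te.placeholder((128, 128), name="A", dtype="float32")
   B = te.placeholder((128, 128), name="B", dtype="float32")
   k = te.reduce_axis((0, 128), name="k")
   # Y = A @ B, expressed as a TE reduction over k
   Y = te.compute((128, 128), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="Y")
   # C = relu(Y)
   C = te.compute((128, 128), lambda i, j: te.max(Y[i, j], 0), name="C")

   # create_prim_func lowers the TE dataflow graph into a TensorIR PrimFunc;
   # "global_symbol" names the function inside the resulting IRModule.
   te_func = te.create_prim_func([A, B, C]).with_attr({"global_symbol": "mm_relu"})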
@@ -455,7 +455,7 @@ is that we need to specify the shape of the input tensors as symbolic variables.

.. GENERATED FROM PYTHON SOURCE LINES 275-287
-.. code-block:: default
+.. code-block:: Python
# Declare symbolic variables
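A sketch of what the truncated hunk likely declares: symbolic shape variables via ``te.var``, so the same PrimFunc works for any M, N, K at runtime (the names and exact form are assumptions, not shown in this diff):

.. code-block:: Python

   from tvm import te

   # Symbolic dimensions instead of fixed 128s
   M, N, K = te.var("m"), te.var("n"), te.var("k")
   A = te.placeholder((M, K), name="A", dtype="float32")
   B = te.placeholder((K, N), name="B", dtype="float32")
   k = te.reduce_axis((0, K), name="k")
   Y = te.compute((M, N), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="Y")
   C = te.compute((M, N), lambda i, j: te.max(Y[i, j], 0), name="C")
   dyn_func = te.create_prim_func([A, B, C]).with_attr({"global_symbol": "mm_relu_dyn"})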
@@ -514,7 +514,7 @@ is that we need to specify the shape of the input tensors as symbolic variables.
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.123 seconds)
+**Total running time of the script:** (0 minutes 0.128 seconds)


.. _sphx_glr_download_deep_dive_tensor_ir_tutorials_creation.py:
@@ -523,13 +523,10 @@ is that we need to specify the shape of the input tensors as symbolic variables.

.. container:: sphx-glr-footer sphx-glr-footer-example

-  .. container:: sphx-glr-download sphx-glr-download-jupyter
-
-     :download:`Download Jupyter notebook: creation.ipynb <creation.ipynb>`
-
-  .. container:: sphx-glr-download sphx-glr-download-python
-
-     :download:`Download Python source code: creation.py <creation.py>`
+  .. container:: sphx-glr-download sphx-glr-download-jupyter
+
+     :download:`Download Jupyter notebook: creation.ipynb <creation.ipynb>`
+     :download:`Download python source code: creation.py <creation.py>`
37 changes: 31 additions & 6 deletions _sources/deep_dive/tensor_ir/tutorials/sg_execution_times.rst.txt
@@ -6,10 +6,35 @@

Computation times
=================
-**00:00.367** total execution time for **deep_dive_tensor_ir_tutorials** files:
+**00:00.379** total execution time for 2 files **from deep_dive/tensor_ir/tutorials**:

-+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_transformation.py` (``transformation.py``) | 00:00.244 | 0.0 MB |
-+-----------------------------------------------------------------------------------------+-----------+--------+
-| :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_creation.py` (``creation.py``)             | 00:00.123 | 0.0 MB |
-+-----------------------------------------------------------------------------------------+-----------+--------+
+.. container::
+
+  .. raw:: html
+
+    <style scoped>
+    <link href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/5.3.0/css/bootstrap.min.css" rel="stylesheet" />
+    <link href="https://cdn.datatables.net/1.13.6/css/dataTables.bootstrap5.min.css" rel="stylesheet" />
+    </style>
+    <script src="https://code.jquery.com/jquery-3.7.0.js"></script>
+    <script src="https://cdn.datatables.net/1.13.6/js/jquery.dataTables.min.js"></script>
+    <script src="https://cdn.datatables.net/1.13.6/js/dataTables.bootstrap5.min.js"></script>
+    <script type="text/javascript" class="init">
+    $(document).ready( function () {
+        $('table.sg-datatable').DataTable({order: [[1, 'desc']]});
+    } );
+    </script>
+
+  .. list-table::
+     :header-rows: 1
+     :class: table table-striped sg-datatable
+
+     * - Example
+       - Time
+       - Mem (MB)
+     * - :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_transformation.py` (``transformation.py``)
+       - 00:00.251
+       - 0.0
+     * - :ref:`sphx_glr_deep_dive_tensor_ir_tutorials_creation.py` (``creation.py``)
+       - 00:00.128
+       - 0.0
39 changes: 18 additions & 21 deletions _sources/deep_dive/tensor_ir/tutorials/transformation.rst.txt
@@ -43,7 +43,7 @@ First, let's take a look at the implementation of ``mm_relu`` in the previous se

.. GENERATED FROM PYTHON SOURCE LINES 37-65
-.. code-block:: default
+.. code-block:: Python
import tvm
@@ -87,7 +87,7 @@ original implementation.

.. GENERATED FROM PYTHON SOURCE LINES 68-92
-.. code-block:: default
+.. code-block:: Python
import numpy as np
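The collapsed lines in this hunk build and time the original module; a sketch of the typical flow (the 128×128 shapes and ``tvm.build`` target are assumptions based on the ``mm_relu`` example, not shown in the diff):

.. code-block:: Python

   import numpy as np
   import tvm

   rt_mod = tvm.build(MyModule, target="llvm")
   a = tvm.nd.array(np.random.rand(128, 128).astype("float32"))
   b = tvm.nd.array(np.random.rand(128, 128).astype("float32"))
   c = tvm.nd.empty((128, 128), dtype="float32")
   # time_evaluator repeatedly invokes the compiled function; printing the
   # result yields the "Execution time summary" table shown below.
   timer = rt_mod.time_evaluator("mm_relu", tvm.cpu(), number=10)
   print(timer(a, b, c))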
@@ -123,7 +123,7 @@ original implementation.
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
-   2.1481       2.1481      2.1481      2.1481       0.0000
+   2.1703       2.1703      2.1703      2.1703       0.0000
@@ -137,7 +137,7 @@ utilizing the provided **MyModule** as input.

.. GENERATED FROM PYTHON SOURCE LINES 97-100
-.. code-block:: default
+.. code-block:: Python
sch = tvm.tir.Schedule(MyModule)
@@ -158,7 +158,7 @@ block **Y** and its associated loops.

.. GENERATED FROM PYTHON SOURCE LINES 105-109
-.. code-block:: default
+.. code-block:: Python
block_Y = sch.get_block("Y")
@@ -181,7 +181,7 @@ non-existence of variable ``j``.

.. GENERATED FROM PYTHON SOURCE LINES 115-118
-.. code-block:: default
+.. code-block:: Python
j0, j1 = sch.split(j, factors=[None, 8])
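Putting the scheduling steps shown in the hunks above together (a sketch, assuming the tutorial's ``MyModule`` is in scope):

.. code-block:: Python

   import tvm

   sch = tvm.tir.Schedule(MyModule)
   block_Y = sch.get_block("Y")
   i, j, k = sch.get_loops(block_Y)
   # Split j into an outer loop j0 and an inner loop j1 of extent 8;
   # factors=[None, 8] lets TVM derive the outer extent automatically.
   j0, j1 = sch.split(j, factors=[None, 8])
   sch.mod.show()  # inspect the transformed module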
@@ -199,7 +199,7 @@ The outcome of the transformation can be examined, as it is retained within ``sc

.. GENERATED FROM PYTHON SOURCE LINES 120-123
-.. code-block:: default
+.. code-block:: Python
sch.mod.show()
@@ -251,7 +251,7 @@ action involves reordering these two loops.

.. GENERATED FROM PYTHON SOURCE LINES 127-132
-.. code-block:: default
+.. code-block:: Python
sch.reorder(j0, k, j1)
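Sketched effect of this reorder (loop names come from the split above; the before/after nests are inferred, not printed in this hunk):

.. code-block:: Python

   # before reorder: loops are i, j0, j1, k
   sch.reorder(j0, k, j1)
   # after reorder:  loops are i, j0, k, j1 -- j1 is now innermost, giving
   # contiguous access along j, the usual reason such a layout runs faster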
@@ -295,7 +295,7 @@ action involves reordering these two loops.
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
-   0.6714       0.6714      0.6714      0.6714       0.0000
+   0.6824       0.6824      0.6824      0.6824       0.0000
@@ -310,7 +310,7 @@ variant. First, we employ a primitive known as **reverse_compute_at** to relocat

.. GENERATED FROM PYTHON SOURCE LINES 138-143
-.. code-block:: default
+.. code-block:: Python
block_C = sch.get_block("C")
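A sketch of the relocation step the surrounding text describes (assuming the schedule and loop ``j0`` from the earlier hunks):

.. code-block:: Python

   block_C = sch.get_block("C")
   # Move consumer block C to loop j0 of its producer Y, so the ReLU in C is
   # computed as soon as each tile of Y is ready.
   sch.reverse_compute_at(block_C, j0)
   sch.mod.show()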
@@ -372,7 +372,7 @@ from the reduction update via the **decompose_reduction** primitive.

.. GENERATED FROM PYTHON SOURCE LINES 153-158
-.. code-block:: default
+.. code-block:: Python
sch.decompose_reduction(block_Y, k)
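What this primitive does, sketched (assuming the schedule built above):

.. code-block:: Python

   # Separate the initialization of Y from its reduction update; the returned
   # handle refers to the newly created init block hoisted out of the k loop.
   init_block = sch.decompose_reduction(block_Y, k)
   sch.mod.show()  # the module now shows an init block plus an update block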
@@ -423,7 +423,7 @@ from the reduction update via the **decompose_reduction** primitive.
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
-   0.2610       0.2610      0.2610      0.2610       0.0000
+   0.2651       0.2651      0.2651      0.2651       0.0000
@@ -441,7 +441,7 @@ of the schedule by ``sch.trace``.

.. GENERATED FROM PYTHON SOURCE LINES 167-170
-.. code-block:: default
+.. code-block:: Python
sch.trace.show()
@@ -474,7 +474,7 @@ Alternatively, we can output the IRModule in conjunction with the historical tra

.. GENERATED FROM PYTHON SOURCE LINES 172-174
-.. code-block:: default
+.. code-block:: Python
sch.show()
@@ -537,7 +537,7 @@ Alternatively, we can output the IRModule in conjunction with the historical tra
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** (0 minutes 0.244 seconds)
+**Total running time of the script:** (0 minutes 0.251 seconds)


.. _sphx_glr_download_deep_dive_tensor_ir_tutorials_transformation.py:
@@ -546,13 +546,10 @@ Alternatively, we can output the IRModule in conjunction with the historical tra

.. container:: sphx-glr-footer sphx-glr-footer-example

-  .. container:: sphx-glr-download sphx-glr-download-jupyter
-
-     :download:`Download Jupyter notebook: transformation.ipynb <transformation.ipynb>`
-
-  .. container:: sphx-glr-download sphx-glr-download-python
-
-     :download:`Download Python source code: transformation.py <transformation.py>`
+  .. container:: sphx-glr-download sphx-glr-download-jupyter
+
+     :download:`Download Jupyter notebook: transformation.ipynb <transformation.ipynb>`
+     :download:`Download python source code: transformation.py <transformation.py>`
26 changes: 19 additions & 7 deletions _sources/get_started/install.rst.txt
@@ -1,3 +1,20 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements. See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership. The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License. You may obtain a copy of the License at
+
+.. http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied. See the License for the
+   specific language governing permissions and limitations
+   under the License.
.. _install:

Installing Apache TVM Unity
@@ -43,16 +60,11 @@ For Ubuntu/Debian users, the following APT Repository may help:
Step 2. Get Source from Github
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
First, You can also choose to clone the source repo from Github. The code of Apache TVM Unity is hosted
-under the `Apache TVM <https://github.com/apache/tvm>`_ but with a different ``unity`` branch.
+under the `Apache TVM <https://github.com/apache/tvm>`_

.. code:: bash
-   git clone https://github.com/apache/tvm -b unity tvm-unity --recursive
-
-.. note::
-   Need to use ``-b unity`` to checkout the ``unity`` branch. Or you can use ``git switch unity`` after
-   cloning the repository.
+   git clone https://github.com/apache/tvm tvm-unity --recursive
.. note::
It's important to use the ``--recursive`` flag when cloning the TVM Unity repository, which will
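If the repository was cloned without ``--recursive``, the submodules can still be fetched afterwards with the standard Git command ``git submodule update --init --recursive``.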