Showing 152 changed files with 41,153 additions and 0 deletions.

@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: a84346970d4342084b42a1639aa49697
tags: 645f666f9bcd5a90fca523b33c5a78b7

@@ -0,0 +1,44 @@
.. meta::
   :description: This website introduces Intel® Extension for PyTorch*
   :keywords: Intel optimization, PyTorch, Intel® Extension for PyTorch*, GPU, discrete GPU, Intel discrete GPU

Welcome to Intel® Extension for PyTorch* Documentation
######################################################

Intel® Extension for PyTorch* extends PyTorch* with up-to-date feature optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel X\ :sup:`e`\ Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* `xpu` device, Intel® Extension for PyTorch* provides easy GPU acceleration of PyTorch* workloads on Intel discrete GPUs.

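The snippet below is a minimal sketch of this GPU path. It assumes a GPU build of the extension with the required drivers installed, and uses a stand-in `torch.nn.Linear` model in place of a real workload.

.. code-block:: python

   import torch
   import intel_extension_for_pytorch as ipex  # importing the extension registers the "xpu" device

   # Stand-in model and input; replace with a real workload.
   model = torch.nn.Linear(4, 4).eval()
   data = torch.randn(1, 4)

   # Move the model and data to an Intel discrete GPU and apply the extension's optimizations.
   model = model.to("xpu")
   data = data.to("xpu")
   model = ipex.optimize(model)

   with torch.no_grad():
       output = model(data)
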
Intel® Extension for PyTorch* provides optimizations for both eager mode and graph mode. Compared to eager mode, graph mode in PyTorch* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. We therefore recommend taking advantage of Intel® Extension for PyTorch* with `TorchScript <https://pytorch.org/docs/stable/jit.html>`_ whenever your workload supports it. You can convert a model with either the `torch.jit.trace()` function or the `torch.jit.script()` function; based on our evaluation, `torch.jit.trace()` supports more workloads, so we recommend it as your first choice.

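As an illustration only, the following sketch (a CPU-only setup and a placeholder model are assumed) combines `ipex.optimize` with `torch.jit.trace`:

.. code-block:: python

   import torch
   import intel_extension_for_pytorch as ipex

   model = torch.nn.Linear(4, 4).eval()   # placeholder model
   data = torch.randn(1, 4)               # example input used for tracing

   model = ipex.optimize(model)

   # Convert the eager-mode model into graph mode so the extension's fusion passes can apply.
   with torch.no_grad():
       traced_model = torch.jit.trace(model, data)
       traced_model = torch.jit.freeze(traced_model)
       output = traced_model(data)
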
The extension can be loaded as a Python module in Python programs or linked as a C++ library in C++ programs. In Python scripts, users can enable it dynamically by importing `intel_extension_for_pytorch`.

Intel® Extension for PyTorch* is structured as shown in the following figure:

.. figure:: ./images/intel_extension_for_pytorch_structure.png
   :width: 800
   :align: center
   :alt: Architecture of Intel® Extension for PyTorch*

|

Optimizations for both eager mode and graph mode contribute to extra performance acceleration with the extension. In eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimal optimizers, and INT8 quantization APIs. Further performance boost is available by converting the eager-mode model into graph mode via the extended graph fusion passes; in graph mode, the fusions reduce operator/kernel invocation overhead and thus increase performance. On CPU, Intel® Extension for PyTorch* automatically dispatches operators to their underlying kernels based on the ISA it detects, and leverages the vectorization and matrix acceleration units available on Intel hardware. The Intel® Extension for PyTorch* runtime extension brings better efficiency with finer-grained thread runtime control and weight sharing. On GPU, optimized operators and kernels are implemented and registered through the PyTorch dispatching mechanism; these operators and kernels are accelerated by the native vectorization and matrix calculation features of Intel GPU hardware. Intel® Extension for PyTorch* for GPU utilizes the `DPC++ <https://github.com/intel/llvm#oneapi-dpc-compiler>`_ compiler, which supports the latest `SYCL* <https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html>`_ standard as well as a number of extensions to it, documented in the `sycl/doc/extensions <https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions>`_ directory.

.. note:: GPU features are not included in CPU-only packages.

Intel® Extension for PyTorch* has been released as an open-source project on `GitHub <https://github.com/intel/intel-extension-for-pytorch>`_. Source code for the GPU package is available at the `xpu-master branch <https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master>`_; check `the tutorial <https://intel.github.io/intel-extension-for-pytorch/xpu/latest/>`_ for detailed information. Due to different development schedules, the CPU-only optimizations might have a newer code base; their source code is available at the `master branch <https://github.com/intel/intel-extension-for-pytorch/tree/master>`_, and `the CPU tutorial <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/>`_ provides detailed information on the CPU side.

.. toctree::
   :hidden:
   :maxdepth: 1

   tutorials/getting_started
   tutorials/cheat_sheet
   tutorials/features
   tutorials/releases
   tutorials/installation
   tutorials/examples
   tutorials/api_doc
   tutorials/performance_tuning
   tutorials/technical_details
   tutorials/blogs_publications
   tutorials/contribution
   tutorials/license

@@ -0,0 +1,137 @@
API Documentation
#################

Device-Agnostic
***************

.. currentmodule:: intel_extension_for_pytorch
.. autofunction:: optimize
.. autofunction:: get_fp32_math_mode
.. autofunction:: set_fp32_math_mode
.. autoclass:: verbose

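A minimal usage sketch of the main device-agnostic entry point, `ipex.optimize`, for CPU inference, mirroring the cheat sheet in this documentation set (the model is a placeholder and the bfloat16 pieces are optional):

.. code-block:: python

   import torch
   import intel_extension_for_pytorch as ipex

   model = torch.nn.Linear(4, 4)   # placeholder model
   model.eval()

   # Apply the device-agnostic optimization entry point.
   model = ipex.optimize(model, dtype=torch.bfloat16)

   with torch.no_grad(), torch.cpu.amp.autocast():
       output = model(torch.randn(1, 4))
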
GPU-Specific
************

Miscellaneous
=============

.. currentmodule:: intel_extension_for_pytorch.xpu
.. StreamContext
.. can_device_access_peer
.. current_blas_handle
.. autofunction:: current_device
.. autofunction:: current_stream
.. default_stream
.. autoclass:: device
.. autofunction:: device_count
.. autoclass:: device_of
.. autofunction:: getDeviceIdListForCard
.. autofunction:: get_device_name
.. autofunction:: get_device_properties
.. get_gencode_flags
.. get_sync_debug_mode
.. autofunction:: init
.. ipc_collect
.. autofunction:: is_available
.. autofunction:: is_initialized
.. memory_usage
.. autofunction:: set_device
.. set_stream
.. autofunction:: stream
.. autofunction:: synchronize

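A hedged sketch of the device-query helpers above; a GPU build of the extension and at least one Intel discrete GPU are assumed:

.. code-block:: python

   import intel_extension_for_pytorch as ipex

   if ipex.xpu.is_available():
       print("XPU device count:", ipex.xpu.device_count())
       print("Current device index:", ipex.xpu.current_device())
       print("Device name:", ipex.xpu.get_device_name(0))
       ipex.xpu.synchronize()   # wait for all queued work on the current device to finish
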
Random Number Generator
=======================

.. currentmodule:: intel_extension_for_pytorch.xpu
.. autofunction:: get_rng_state
.. autofunction:: get_rng_state_all
.. autofunction:: set_rng_state
.. autofunction:: set_rng_state_all
.. autofunction:: manual_seed
.. autofunction:: manual_seed_all
.. autofunction:: seed
.. autofunction:: seed_all
.. autofunction:: initial_seed

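For example (again assuming a GPU build), a reproducible run could seed every XPU device and snapshot the generator state:

.. code-block:: python

   import intel_extension_for_pytorch as ipex

   ipex.xpu.manual_seed_all(42)        # seed the RNG on every XPU device
   state = ipex.xpu.get_rng_state()    # snapshot the current device's generator state
   # ... run stochastic code ...
   ipex.xpu.set_rng_state(state)       # restore the snapshot for reproducibility
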
Streams and events
==================

.. currentmodule:: intel_extension_for_pytorch.xpu
.. autoclass:: Stream
   :members:
.. ExternalStream
.. autoclass:: Event
   :members:

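A sketch of how `Stream` and `Event` are typically combined with the `stream()` helper listed under Miscellaneous; the semantics are assumed to mirror their `torch.cuda` counterparts:

.. code-block:: python

   import intel_extension_for_pytorch as ipex

   side_stream = ipex.xpu.Stream()      # create a side stream
   done = ipex.xpu.Event()

   with ipex.xpu.stream(side_stream):   # enqueue work on the side stream
       # ... launch kernels here ...
       done.record()                    # mark a point in the side stream

   done.synchronize()                   # wait for everything recorded before the event
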
Memory management
=================

.. currentmodule:: intel_extension_for_pytorch.xpu
.. autofunction:: empty_cache
.. list_gpu_processes
.. mem_get_info
.. autofunction:: memory_stats
.. autofunction:: memory_summary
.. autofunction:: memory_snapshot
.. autofunction:: memory_allocated
.. autofunction:: max_memory_allocated
.. reset_max_memory_allocated
.. autofunction:: memory_reserved
.. autofunction:: max_memory_reserved
.. set_per_process_memory_fraction
.. memory_cached
.. max_memory_cached
.. reset_max_memory_cached
.. autofunction:: reset_peak_memory_stats
.. caching_allocator_alloc
.. caching_allocator_delete
.. autofunction:: memory_stats_as_nested_dict
.. autofunction:: reset_accumulated_memory_stats

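A brief, hedged sketch of inspecting the documented counters around a workload on a GPU build:

.. code-block:: python

   import intel_extension_for_pytorch as ipex

   ipex.xpu.reset_peak_memory_stats()
   # ... run a workload on the xpu device ...
   print("currently allocated bytes:", ipex.xpu.memory_allocated())
   print("peak allocated bytes:", ipex.xpu.max_memory_allocated())
   print(ipex.xpu.memory_summary())   # human-readable report
   ipex.xpu.empty_cache()             # release cached blocks held by the allocator
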
C++ API
=======

.. doxygenenum:: xpu::FP32_MATH_MODE

.. doxygenfunction:: xpu::set_fp32_math_mode

.. doxygenfunction:: xpu::get_queue_from_stream

CPU-Specific
************

Miscellaneous
=============

.. currentmodule:: intel_extension_for_pytorch
.. autofunction:: enable_onednn_fusion

Quantization
============

.. automodule:: intel_extension_for_pytorch.quantization
.. autofunction:: prepare
.. autofunction:: convert

Experimental API; an introduction is available at the `feature page <./features/int8_recipe_tuning_api.md>`_.

.. autofunction:: autotune

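A minimal static post-training INT8 sketch with `prepare` and `convert`, mirroring the cheat sheet in this documentation set (the model and calibration data are placeholders):

.. code-block:: python

   import torch
   import intel_extension_for_pytorch as ipex

   model = torch.nn.Linear(4, 4).eval()                        # placeholder FP32 model
   example_input = torch.randn(1, 4)
   calibration_data = [torch.randn(1, 4) for _ in range(10)]   # placeholder calibration set

   qconfig = ipex.quantization.default_static_qconfig
   prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=example_input, inplace=False)

   for d in calibration_data:                                  # calibrate on representative data
       prepared_model(d)

   converted_model = ipex.quantization.convert(prepared_model)
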
CPU Runtime
===========

.. automodule:: intel_extension_for_pytorch.cpu.runtime
.. autofunction:: is_runtime_ext_enabled
.. autoclass:: CPUPool
.. autoclass:: pin
.. autoclass:: MultiStreamModuleHint
.. autoclass:: MultiStreamModule
.. autoclass:: Task
.. autofunction:: get_core_list_of_node_id

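A small sketch of pinning inference to the cores of one NUMA node with `CPUPool` and `pin` (the model is a placeholder, and the runtime extension must be available):

.. code-block:: python

   import torch
   import intel_extension_for_pytorch as ipex

   model = torch.nn.Linear(4, 4).eval()              # placeholder model
   data = torch.randn(1, 4)

   cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)    # cores of NUMA node 0
   with ipex.cpu.runtime.pin(cpu_pool):              # run the enclosed code on that pool
       with torch.no_grad():
           output = model(data)
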
.. .. automodule:: intel_extension_for_pytorch.quantization
.. :members:

39 changes: 39 additions & 0 deletions
xpu/2.0.110+xpu/_sources/tutorials/blogs_publications.md.txt

@@ -0,0 +1,39 @@
Blogs & Publications
====================

* [Accelerate Llama 2 with Intel AI Hardware and Software Optimizations, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html)
* [Accelerate PyTorch\* Training and Inference Performance using Intel® AMX, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-training-inference-on-amx.html)
* [Intel® Deep Learning Boost (Intel® DL Boost) - Improve Inference Performance of Hugging Face BERT Base Model in Google Cloud Platform (GCP) Technology Guide, Apr 2023](https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-intel-dl-boost-improve-inference-performance-of-hugging-face-bert-base-model-in-google-cloud-platform-gcp-technology-guide)
* [Get Started with Intel® Extension for PyTorch\* on GPU | Intel Software, Mar 2023](https://www.youtube.com/watch?v=Id-rE2Q7xZ0&t=1s)
* [Accelerate PyTorch\* INT8 Inference with New “X86” Quantization Backend on X86 CPUs, Mar 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html)
* [Accelerating PyTorch Transformers with Intel Sapphire Rapids, Part 1, Jan 2023](https://huggingface.co/blog/intel-sapphire-rapids)
* [Intel® Deep Learning Boost - Improve Inference Performance of BERT Base Model from Hugging Face for Network Security Technology Guide, Jan 2023](https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-improve-inference-performance-of-bert-base-model-from-hugging-face-for-network-security-technology-guide)
* [Scaling inference on CPUs with TorchServe, PyTorch Conference, Dec 2022](https://www.youtube.com/watch?v=066_Jd6cwZg)
* [What is New in Intel Extension for PyTorch, PyTorch Conference, Dec 2022](https://www.youtube.com/watch?v=SE56wFXdvP4&t=1s)
* [Accelerating PyG on Intel CPUs, Dec 2022](https://www.pyg.org/ns-newsarticle-accelerating-pyg-on-intel-cpus)
* [Accelerating PyTorch Deep Learning Models on Intel XPUs, Dec 2022](https://www.oneapi.io/event-sessions/accelerating-pytorch-deep-learning-models-on-intel-xpus-2-ai-hpc-2022/)
* [Introducing the Intel® Extension for PyTorch\* for GPUs, Dec 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/introducing-intel-extension-for-pytorch-for-gpus.html)
* [PyTorch Stable Diffusion Using Hugging Face and Intel Arc, Nov 2022](https://towardsdatascience.com/pytorch-stable-diffusion-using-hugging-face-and-intel-arc-77010e9eead6)
* [PyTorch 1.13: New Potential for AI Developers to Enhance Model Performance and Accuracy, Nov 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/pytorch-1-13-new-potential-for-ai-developers.html)
* [Easy Quantization in PyTorch Using Fine-Grained FX, Sep 2022](https://medium.com/intel-analytics-software/easy-quantization-in-pytorch-using-fine-grained-fx-80be2c4bc2d6)
* [Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16, Aug 2022](https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/)
* [Accelerating PyTorch Vision Models with Channels Last on CPU, Aug 2022](https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/)
* [One-Click Enabling of Intel Neural Compressor Features in PyTorch Scripts, Aug 2022](https://medium.com/intel-analytics-software/one-click-enable-intel-neural-compressor-features-in-pytorch-scripts-5d4e31f5a22b)
* [Increase PyTorch Inference Throughput by 4x, Jul 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/increase-pytorch-inference-throughput-by-4x.html)
* [PyTorch Inference Acceleration with Intel® Neural Compressor, Jun 2022](https://medium.com/pytorch/pytorch-inference-acceleration-with-intel-neural-compressor-842ef4210d7d)
* [Accelerating PyTorch with Intel® Extension for PyTorch, May 2022](https://medium.com/pytorch/accelerating-pytorch-with-intel-extension-for-pytorch-3aef51ea3722)
* [Grokking PyTorch Intel CPU performance from first principles (parts 1), Apr 2022](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex.html)
* [Grokking PyTorch Intel CPU performance from first principles (parts 2), Apr 2022](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex_2.html)
* [Grokking PyTorch Intel CPU performance from first principles, Apr 2022](https://medium.com/pytorch/grokking-pytorch-intel-cpu-performance-from-first-principles-7e39694412db)
* [KT Optimizes Performance for Personalized Text-to-Speech, Nov 2021](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/KT-Optimizes-Performance-for-Personalized-Text-to-Speech/post/1337757)
* [Accelerating PyTorch distributed fine-tuning with Intel technologies, Nov 2021](https://huggingface.co/blog/accelerating-pytorch)
* [Scaling up BERT-like model Inference on modern CPU - parts 1, Apr 2021](https://huggingface.co/blog/bert-cpu-scaling-part-1)
* [Scaling up BERT-like model Inference on modern CPU - parts 2, Nov 2021](https://huggingface.co/blog/bert-cpu-scaling-part-2)
* [NAVER: Low-Latency Machine-Learning Inference](https://www.intel.com/content/www/us/en/customer-spotlight/stories/naver-ocr-customer-story.html)
* [Intel® Extensions for PyTorch, Feb 2021](https://pytorch.org/tutorials/recipes/recipes/intel_extension_for_pytorch.html)
* [Optimizing DLRM by using PyTorch with oneCCL Backend, Feb 2021](https://pytorch.medium.com/optimizing-dlrm-by-using-pytorch-with-oneccl-backend-9f85b8ef6929)
* [Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology, Feb 2021](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
  *Note*: APIs mentioned in it are deprecated.
* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability, Jun 2020](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659)
* [Intel and Facebook\* collaborate to boost PyTorch\* CPU performance, Apr 2019](https://www.intel.com/content/www/us/en/developer/articles/case-study/intel-and-facebook-collaborate-to-boost-pytorch-cpu-performance.html)
* [Intel and Facebook\* Collaborate to Boost Caffe\*2 Performance on Intel CPU’s, Apr 2017](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-and-facebook-collaborate-to-boost-caffe2-performance-on-intel-cpu-s.html)

@@ -0,0 +1,22 @@
Cheat Sheet
===========

Get started with Intel® Extension for PyTorch\* using the following commands:

| Description | Command |
| -------- | ------- |
| Basic CPU Installation | `python -m pip install intel_extension_for_pytorch` |
| Basic GPU Installation | `pip install torch==<version> -f https://developer.intel.com/ipex-whl-stable-xpu`<br>`pip install intel_extension_for_pytorch==<version> -f https://developer.intel.com/ipex-whl-stable-xpu` |
| Import Intel® Extension for PyTorch\* | `import intel_extension_for_pytorch as ipex` |
| Capture a Verbose Log (Command Prompt) | `export ONEDNN_VERBOSE=1` |
| Optimization During Training | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br>`model, optimizer = ipex.optimize(model, optimizer=optimizer)` |
| Optimization During Inference | `model = ...`<br>`model.eval()`<br>`model = ipex.optimize(model)` |
| Optimization Using the Low-Precision Data Type bfloat16 <br>During Training (Default FP32) | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br/><br/>`model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)`<br/><br/>`with torch.no_grad():`<br>`    with torch.cpu.amp.autocast():`<br>`        model(data)` |
| Optimization Using the Low-Precision Data Type bfloat16 <br>During Inference (Default FP32) | `model = ...`<br>`model.eval()`<br/><br/>`model = ipex.optimize(model, dtype=torch.bfloat16)`<br/><br/>`with torch.cpu.amp.autocast():`<br>`    model(data)` |
| [Experimental] Fast BERT Optimization | `from transformers import BertModel`<br>`model = BertModel.from_pretrained("bert-base-uncased")`<br>`model.eval()`<br/><br/>`model = ipex.fast_bert(model, dtype=torch.bfloat16)` |
| Run CPU Launch Script (Command Prompt): <br>Automate Configuration Settings for Performance | `ipexrun [knobs] <your_pytorch_script> [args]` |
| [Experimental] Run HyperTune to perform hyperparameter/execution configuration search | `python -m intel_extension_for_pytorch.cpu.hypertune --conf-file <your_conf_file> <your_python_script> [args]` |
| [Experimental] Enable Graph capture | `model = ...`<br>`model.eval()`<br>`model = ipex.optimize(model, graph_mode=True)` |
| Post-Training INT8 Quantization (Static) | `model = ...`<br>`model.eval()`<br>`data = ...`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`for d in calibration_data_loader():`<br>`    prepared_model(d)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)` |
| Post-Training INT8 Quantization (Dynamic) | `model = ...`<br>`model.eval()`<br>`data = ...`<br/><br/>`qconfig = ipex.quantization.default_dynamic_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)` |
| [Experimental] Post-Training INT8 Quantization (Tuning Recipe) | `model = ...`<br>`model.eval()`<br>`data = ...`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`tuned_model = ipex.quantization.autotune(prepared_model, calibration_data_loader, eval_function, sampling_sizes=[100], accuracy_criterion={'relative': .01}, tuning_time=0)`<br/><br/>`convert_model = ipex.quantization.convert(tuned_model)` |

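Put together, the training rows above expand into a sketch like the following (placeholder model, optimizer, and data; the bfloat16 pieces are optional):

```python
import torch
import intel_extension_for_pytorch as ipex

# Placeholder model, optimizer, and data; replace with a real workload.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()
data, target = torch.randn(8, 4), torch.randn(8, 4)

model.train()
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

# Forward pass under bfloat16 autocast; backward and optimizer step outside it.
with torch.cpu.amp.autocast():
    loss = criterion(model(data), target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```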