From 23f7c1c088dfa76cf9b9c9afb89d602cb8b7fdb2 Mon Sep 17 00:00:00 2001
From: Jing Xu
Date: Tue, 19 Sep 2023 06:10:27 +0900
Subject: [PATCH] update download links (#2070)

* update download links

* update download links
---
 cpu/1.11.200+cpu/_sources/tutorials/installation.md.txt | 6 +++---
 cpu/1.11.200+cpu/tutorials/installation.html | 6 +++---
 cpu/1.12.0+cpu/_sources/tutorials/installation.md.txt | 6 +++---
 cpu/1.12.0+cpu/tutorials/installation.html | 6 +++---
 cpu/1.12.100+cpu/_sources/tutorials/installation.md.txt | 6 +++---
 cpu/1.12.100+cpu/tutorials/installation.html | 6 +++---
 cpu/1.12.300+cpu/_sources/tutorials/installation.md.txt | 6 +++---
 cpu/1.12.300+cpu/tutorials/installation.html | 6 +++---
 cpu/1.13.0+cpu/_sources/tutorials/installation.md.txt | 4 ++--
 cpu/1.13.0+cpu/tutorials/installation.html | 4 ++--
 cpu/1.13.100+cpu/_sources/tutorials/installation.md.txt | 4 ++--
 cpu/1.13.100+cpu/tutorials/installation.html | 4 ++--
 cpu/2.0.0+cpu/_sources/tutorials/installation.md.txt | 4 ++--
 cpu/2.0.0+cpu/tutorials/installation.html | 4 ++--
 cpu/2.0.100+cpu/_sources/tutorials/installation.md.txt | 4 ++--
 cpu/2.0.100+cpu/tutorials/installation.html | 4 ++--
 xpu/1.10.200+gpu/_sources/tutorials/installation.md.txt | 4 ++--
 xpu/1.10.200+gpu/tutorials/installation.html | 4 ++--
 xpu/1.13.10+xpu/_sources/tutorials/features/DDP.md.txt | 6 +++---
 xpu/1.13.10+xpu/_sources/tutorials/getting_started.md.txt | 2 +-
 xpu/1.13.10+xpu/_sources/tutorials/installation.md.txt | 4 ++--
 .../tutorials/performance_tuning/known_issues.md.txt | 2 +-
 xpu/1.13.10+xpu/tutorials/features/DDP.html | 6 +++---
 xpu/1.13.10+xpu/tutorials/getting_started.html | 2 +-
 xpu/1.13.10+xpu/tutorials/installation.html | 4 ++--
 .../tutorials/performance_tuning/known_issues.html | 2 +-
 xpu/1.13.120+xpu/_sources/tutorials/features/DDP.md.txt | 6 +++---
 xpu/1.13.120+xpu/_sources/tutorials/getting_started.md.txt | 2 +-
 xpu/1.13.120+xpu/_sources/tutorials/installation.md.txt | 4 ++--
 .../tutorials/performance_tuning/known_issues.md.txt | 2 +-
 xpu/1.13.120+xpu/tutorials/features/DDP.html | 6 +++---
 xpu/1.13.120+xpu/tutorials/getting_started.html | 2 +-
 xpu/1.13.120+xpu/tutorials/installation.html | 4 ++--
 .../tutorials/performance_tuning/known_issues.html | 2 +-
 xpu/2.0.110+xpu/_sources/tutorials/cheat_sheet.md.txt | 2 +-
 xpu/2.0.110+xpu/_sources/tutorials/features/DDP.md.txt | 6 +++---
 .../_sources/tutorials/installations/linux.rst.txt | 4 ++--
 .../_sources/tutorials/installations/windows.rst.txt | 2 +-
 .../tutorials/performance_tuning/known_issues.md.txt | 2 +-
 xpu/2.0.110+xpu/tutorials/cheat_sheet.html | 2 +-
 xpu/2.0.110+xpu/tutorials/features/DDP.html | 6 +++---
 xpu/2.0.110+xpu/tutorials/installations/linux.html | 4 ++--
 xpu/2.0.110+xpu/tutorials/installations/windows.html | 2 +-
 .../tutorials/performance_tuning/known_issues.html | 2 +-
 44 files changed, 88 insertions(+), 88 deletions(-)

diff --git a/cpu/1.11.200+cpu/_sources/tutorials/installation.md.txt b/cpu/1.11.200+cpu/_sources/tutorials/installation.md.txt
index ccd4728ff..c2d1fc4f9 100644
--- a/cpu/1.11.200+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/1.11.200+cpu/_sources/tutorials/installation.md.txt
@@ -61,13 +61,13 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 For pre-built wheel files with oneDNN Graph Compiler, please use the following command to perform the installation.
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 ```
 
 **Note:** For version prior to 1.10.0, please use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -75,7 +75,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a package with a specific version, please run with the following command.
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/1.11.200+cpu/tutorials/installation.html b/cpu/1.11.200+cpu/tutorials/installation.html
index 4545d6842..6fee9a0da 100644
--- a/cpu/1.11.200+cpu/tutorials/installation.html
+++ b/cpu/1.11.200+cpu/tutorials/installation.html
@@ -242,17 +242,17 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

For pre-built wheel files with oneDNN Graph Compiler, please use the following command to perform the installation.

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 

Note: For version prior to 1.10.0, please use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a package with a specific version, please run with the following command.

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
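Read together, the replacements above are mechanical: each short `developer.intel.com/ipex-whl-*` alias becomes the same `pytorch-extension.intel.com/release-whl` endpoint, with three query parameters selecting the release channel, device, and region. A minimal sketch of that pattern (the variable names are illustrative, not part of the docs):

```shell
# The new wheel-index links share one base URL; only three query parameters vary.
BASE='https://pytorch-extension.intel.com/release-whl'
release='stable'   # 'stable' or 'dev' (the old -whl-stable / -whl-dev aliases)
device='cpu'       # 'cpu', 'xpu', or 'xpu-idp' in this patch
repo='us'
URL="${BASE}?release=${release}&device=${device}&repo=${repo}"
echo "$URL"
```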
diff --git a/cpu/1.12.0+cpu/_sources/tutorials/installation.md.txt b/cpu/1.12.0+cpu/_sources/tutorials/installation.md.txt
index 1a3ace92d..024436281 100644
--- a/cpu/1.12.0+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/1.12.0+cpu/_sources/tutorials/installation.md.txt
@@ -63,13 +63,13 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 For pre-built wheel files with oneDNN Graph Compiler, use the following command to perform the installation.
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 ```
 
 **Note:** For versions before 1.10.0, use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -77,7 +77,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a package with a specific version, run with the following command:
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/1.12.0+cpu/tutorials/installation.html b/cpu/1.12.0+cpu/tutorials/installation.html
index ae75213da..c15d5f267 100644
--- a/cpu/1.12.0+cpu/tutorials/installation.html
+++ b/cpu/1.12.0+cpu/tutorials/installation.html
@@ -254,17 +254,17 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

For pre-built wheel files with oneDNN Graph Compiler, use the following command to perform the installation.

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 

Note: For versions before 1.10.0, use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a package with a specific version, run with the following command:

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
diff --git a/cpu/1.12.100+cpu/_sources/tutorials/installation.md.txt b/cpu/1.12.100+cpu/_sources/tutorials/installation.md.txt
index 4453f67d0..ab99716ab 100644
--- a/cpu/1.12.100+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/1.12.100+cpu/_sources/tutorials/installation.md.txt
@@ -64,13 +64,13 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 For pre-built wheel files with oneDNN Graph Compiler, use the following command to perform the installation.
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 ```
 
 **Note:** For versions before 1.10.0, use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -78,7 +78,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a package with a specific version, run with the following command:
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/1.12.100+cpu/tutorials/installation.html b/cpu/1.12.100+cpu/tutorials/installation.html
index 5b6958fb7..a810af9b9 100644
--- a/cpu/1.12.100+cpu/tutorials/installation.html
+++ b/cpu/1.12.100+cpu/tutorials/installation.html
@@ -262,17 +262,17 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

For pre-built wheel files with oneDNN Graph Compiler, use the following command to perform the installation.

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 

Note: For versions before 1.10.0, use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a package with a specific version, run with the following command:

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
diff --git a/cpu/1.12.300+cpu/_sources/tutorials/installation.md.txt b/cpu/1.12.300+cpu/_sources/tutorials/installation.md.txt
index 9ced63c52..20738eb12 100644
--- a/cpu/1.12.300+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/1.12.300+cpu/_sources/tutorials/installation.md.txt
@@ -67,13 +67,13 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 For pre-built wheel files with oneDNN Graph Compiler, use the following command to perform the installation.
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 ```
 
 **Note:** For versions before 1.10.0, use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -81,7 +81,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a package with a specific version, run with the following command:
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/1.12.300+cpu/tutorials/installation.html b/cpu/1.12.300+cpu/tutorials/installation.html
index 8249af05c..742854092 100644
--- a/cpu/1.12.300+cpu/tutorials/installation.html
+++ b/cpu/1.12.300+cpu/tutorials/installation.html
@@ -274,17 +274,17 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

For pre-built wheel files with oneDNN Graph Compiler, use the following command to perform the installation.

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-dev
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=dev&device=cpu&repo=us
 

Note: For versions before 1.10.0, use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a package with a specific version, run with the following command:

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
diff --git a/cpu/1.13.0+cpu/_sources/tutorials/installation.md.txt b/cpu/1.13.0+cpu/_sources/tutorials/installation.md.txt
index 3abf1c548..01737967c 100644
--- a/cpu/1.13.0+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/1.13.0+cpu/_sources/tutorials/installation.md.txt
@@ -69,7 +69,7 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 **Note:** For versions before 1.10.0, use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -77,7 +77,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a package with a specific version, run with the following command:
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/1.13.0+cpu/tutorials/installation.html b/cpu/1.13.0+cpu/tutorials/installation.html
index f5b8bc454..91f5c2e4e 100644
--- a/cpu/1.13.0+cpu/tutorials/installation.html
+++ b/cpu/1.13.0+cpu/tutorials/installation.html
@@ -286,12 +286,12 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

Note: For versions before 1.10.0, use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a package with a specific version, run with the following command:

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
diff --git a/cpu/1.13.100+cpu/_sources/tutorials/installation.md.txt b/cpu/1.13.100+cpu/_sources/tutorials/installation.md.txt
index bc01e5b4e..a463ff982 100644
--- a/cpu/1.13.100+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/1.13.100+cpu/_sources/tutorials/installation.md.txt
@@ -70,7 +70,7 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 **Note:** For versions before 1.10.0, use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -78,7 +78,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a history package with a specific version, run with the following command:
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/1.13.100+cpu/tutorials/installation.html b/cpu/1.13.100+cpu/tutorials/installation.html
index a96d87066..ea0a2ddcb 100644
--- a/cpu/1.13.100+cpu/tutorials/installation.html
+++ b/cpu/1.13.100+cpu/tutorials/installation.html
@@ -295,12 +295,12 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

Note: For versions before 1.10.0, use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a history package with a specific version, run with the following command:

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
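One practical side effect of the longer URLs: unlike the old `developer.intel.com` short links, the new ones contain `&`, a shell metacharacter, so the index URL must be quoted when a command like the ones above is typed into a shell. A hedged sketch (the pinned version below is only an example):

```shell
# Quote the index URL; an unquoted '&' would split the command at each parameter.
URL='https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us'
# Printed rather than executed here, since the actual install needs network access:
echo python -m pip install 'intel_extension_for_pytorch==2.0.100' -f "$URL"
```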
diff --git a/cpu/2.0.0+cpu/_sources/tutorials/installation.md.txt b/cpu/2.0.0+cpu/_sources/tutorials/installation.md.txt
index fa417bd7a..2bade9c51 100644
--- a/cpu/2.0.0+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/2.0.0+cpu/_sources/tutorials/installation.md.txt
@@ -72,7 +72,7 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 **Note:** For versions before 1.10.0, use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -80,7 +80,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a history package with a specific version, run with the following command:
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/2.0.0+cpu/tutorials/installation.html b/cpu/2.0.0+cpu/tutorials/installation.html
index a9d852674..c6ba5877e 100644
--- a/cpu/2.0.0+cpu/tutorials/installation.html
+++ b/cpu/2.0.0+cpu/tutorials/installation.html
@@ -320,12 +320,12 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

Note: For versions before 1.10.0, use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a history package with a specific version, run with the following command:

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
diff --git a/cpu/2.0.100+cpu/_sources/tutorials/installation.md.txt b/cpu/2.0.100+cpu/_sources/tutorials/installation.md.txt
index 2df7f781c..2b1b98b73 100644
--- a/cpu/2.0.100+cpu/_sources/tutorials/installation.md.txt
+++ b/cpu/2.0.100+cpu/_sources/tutorials/installation.md.txt
@@ -73,7 +73,7 @@ python -m pip install intel_extension_for_pytorch
 Alternatively, you can also install the latest version with the following commands:
 
 ```
-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 **Note:** For versions before 1.10.0, use package name `torch_ipex`, rather than `intel_extension_for_pytorch`.
@@ -81,7 +81,7 @@ python -m pip install intel_extension_for_pytorch -f https://developer.intel.com
 **Note:** To install a history package with a specific version, run with the following command:
 
 ```
-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 ```
 
 ## Install via source compilation
diff --git a/cpu/2.0.100+cpu/tutorials/installation.html b/cpu/2.0.100+cpu/tutorials/installation.html
index df23bf044..a5ad2c60b 100644
--- a/cpu/2.0.100+cpu/tutorials/installation.html
+++ b/cpu/2.0.100+cpu/tutorials/installation.html
@@ -329,12 +329,12 @@

Install via wheel file

Alternatively, you can also install the latest version with the following commands:

-python -m pip install intel_extension_for_pytorch -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install intel_extension_for_pytorch -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 

Note: For versions before 1.10.0, use package name torch_ipex, rather than intel_extension_for_pytorch.

Note: To install a history package with a specific version, run with the following command:

-python -m pip install <package_name>==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
+python -m pip install <package_name>==<version_name> -f https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 
diff --git a/xpu/1.10.200+gpu/_sources/tutorials/installation.md.txt b/xpu/1.10.200+gpu/_sources/tutorials/installation.md.txt
index 315da90b6..936366432 100644
--- a/xpu/1.10.200+gpu/_sources/tutorials/installation.md.txt
+++ b/xpu/1.10.200+gpu/_sources/tutorials/installation.md.txt
@@ -59,7 +59,7 @@ Prebuilt wheel files availability matrix for Python versions:
 ### Install PyTorch
 
 ```bash
-python -m pip install torch==1.10.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install torch==1.10.0a0 -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 ```
 
 ### Install Numpy
@@ -87,7 +87,7 @@ For torchaudio installation, please follow the [instructions](https://github.com
 ### Install Intel® Extension for PyTorch\*
 
 ```bash
-python -m pip install intel_extension_for_pytorch==1.10.200+gpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install intel_extension_for_pytorch==1.10.200+gpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 ```
 
 ## Install via compiling from source
diff --git a/xpu/1.10.200+gpu/tutorials/installation.html b/xpu/1.10.200+gpu/tutorials/installation.html
index d71511a4d..e270530d1 100644
--- a/xpu/1.10.200+gpu/tutorials/installation.html
+++ b/xpu/1.10.200+gpu/tutorials/installation.html
@@ -209,7 +209,7 @@

Install via wheel files

Install PyTorch

-python -m pip install torch==1.10.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install torch==1.10.0a0 -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 
@@ -233,7 +233,7 @@

Install torchvision and torchaudio (Optional)

Install Intel® Extension for PyTorch*

-python -m pip install intel_extension_for_pytorch==1.10.200+gpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install intel_extension_for_pytorch==1.10.200+gpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 
diff --git a/xpu/1.13.10+xpu/_sources/tutorials/features/DDP.md.txt b/xpu/1.13.10+xpu/_sources/tutorials/features/DDP.md.txt
index d5d116f51..0ddd070b8 100644
--- a/xpu/1.13.10+xpu/_sources/tutorials/features/DDP.md.txt
+++ b/xpu/1.13.10+xpu/_sources/tutorials/features/DDP.md.txt
@@ -46,11 +46,11 @@ Prebuilt wheel files for CPU, GPU with generic Python\* and GPU with Intel® Dis
 ```
 # Generic Python* for CPU
-REPO_URL: https://developer.intel.com/ipex-whl-stable-cpu
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 # Generic Python* for GPU
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 # Intel® Distribution for Python*
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
 ```
 
 Installation from either repository shares the command below. Replace the place holder `<REPO_URL>` with a real URL mentioned above.
diff --git a/xpu/1.13.10+xpu/_sources/tutorials/getting_started.md.txt b/xpu/1.13.10+xpu/_sources/tutorials/getting_started.md.txt
index 008db07c7..6aeafc9cf 100644
--- a/xpu/1.13.10+xpu/_sources/tutorials/getting_started.md.txt
+++ b/xpu/1.13.10+xpu/_sources/tutorials/getting_started.md.txt
@@ -5,7 +5,7 @@
 Prebuilt wheel files are released for multiple Python versions. You can install them simply with the following pip command.
 
 ```bash
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 ```
 
 You can run a simple sanity test to double confirm if the correct version is installed, and if the software stack can get correct hardware information onboard your system.
diff --git a/xpu/1.13.10+xpu/_sources/tutorials/installation.md.txt b/xpu/1.13.10+xpu/_sources/tutorials/installation.md.txt
index 685a78f78..2e9d27c6a 100644
--- a/xpu/1.13.10+xpu/_sources/tutorials/installation.md.txt
+++ b/xpu/1.13.10+xpu/_sources/tutorials/installation.md.txt
@@ -82,10 +82,10 @@ Prebuilt wheel files for generic Python\* and Intel® Distribution for Python\*
 ```bash
 # General Python*
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 
 # Intel® Distribution for Python*
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu-idp
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
 ```
 
 **Note:** Wheel files for Intel® Distribution for Python\* only supports Python 3.9. The support starts from 1.13.10+xpu.
diff --git a/xpu/1.13.10+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt b/xpu/1.13.10+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
index 04e4f9504..7831eaf59 100644
--- a/xpu/1.13.10+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
+++ b/xpu/1.13.10+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
@@ -13,7 +13,7 @@ Known Issues
   ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
   ```
 
-  DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
+  DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us) to avoid this issue.
 
 - Can't find oneMKL library when build Intel® Extension for PyTorch\* without oneMKL
diff --git a/xpu/1.13.10+xpu/tutorials/features/DDP.html b/xpu/1.13.10+xpu/tutorials/features/DDP.html
index 1eba2661a..aadd7be0c 100644
--- a/xpu/1.13.10+xpu/tutorials/features/DDP.html
+++ b/xpu/1.13.10+xpu/tutorials/features/DDP.html
@@ -157,11 +157,11 @@

Install from source:
Install from prebuilt wheel:

Prebuilt wheel files for CPU, GPU with generic Python* and GPU with Intel® Distribution for Python* are released in separate repositories.

# Generic Python* for CPU
-REPO_URL: https://developer.intel.com/ipex-whl-stable-cpu
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 # Generic Python* for GPU
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 # Intel® Distribution for Python*
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
 

Installation from either repository shares the command below. Replace the place holder <REPO_URL> with a real URL mentioned above.
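That substitution can be sketched as follows (the stable XPU repository is chosen here purely as an example):

```shell
# Pick one REPO_URL from the list above and pass it to pip via -f/--find-links.
REPO_URL='https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us'
echo python -m pip install intel_extension_for_pytorch -f "$REPO_URL"
```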

diff --git a/xpu/1.13.10+xpu/tutorials/getting_started.html b/xpu/1.13.10+xpu/tutorials/getting_started.html
index 6d21466b7..3c6f98091 100644
--- a/xpu/1.13.10+xpu/tutorials/getting_started.html
+++ b/xpu/1.13.10+xpu/tutorials/getting_started.html
@@ -92,7 +92,7 @@

Getting Started

Installation

Prebuilt wheel files are released for multiple Python versions. You can install them simply with the following pip command.

-
python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+
python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 

You can run a simple sanity test to confirm that the correct version is installed and that the software stack can detect the hardware on your system.
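The sanity-test script itself is not reproduced here; as a hedged sketch of one check such a test can perform (the has_xpu_build helper is hypothetical), the installed wheel's PEP 440 local version segment can be inspected to verify that an XPU build such as 1.13.10+xpu is present:

```python
from importlib.metadata import PackageNotFoundError, version

def has_xpu_build(package: str = "intel_extension_for_pytorch") -> bool:
    """Return True if the installed wheel is an XPU build.

    XPU wheels carry a '+xpu' local version suffix (e.g. '1.13.10+xpu'),
    so checking the suffix distinguishes them from CPU builds.
    """
    try:
        return version(package).endswith("+xpu")
    except PackageNotFoundError:
        # Package is not installed at all.
        return False
```

A fuller sanity test would also import the package and query the detected devices, which requires the hardware and drivers to be present.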

diff --git a/xpu/1.13.10+xpu/tutorials/installation.html b/xpu/1.13.10+xpu/tutorials/installation.html
index bdd3374e6..21da2c8b7 100644
--- a/xpu/1.13.10+xpu/tutorials/installation.html
+++ b/xpu/1.13.10+xpu/tutorials/installation.html
@@ -262,10 +262,10 @@

Install via wheel files

Prebuilt wheel files for generic Python* and Intel® Distribution for Python* are released in separate repositories.

# General Python*
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 
 # Intel® Distribution for Python*
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu-idp
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
 

Note: Wheel files for Intel® Distribution for Python* support Python 3.9 only. Support starts from release 1.13.10+xpu.

diff --git a/xpu/1.13.10+xpu/tutorials/performance_tuning/known_issues.html b/xpu/1.13.10+xpu/tutorials/performance_tuning/known_issues.html
index 10b02dc24..c6f184c04 100644
--- a/xpu/1.13.10+xpu/tutorials/performance_tuning/known_issues.html
+++ b/xpu/1.13.10+xpu/tutorials/performance_tuning/known_issues.html
@@ -105,7 +105,7 @@

Known Issues in GPU-Specific
ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
 

-

DPC++ does not support _GLIBCXX_USE_CXX11_ABI=0; Intel® Extension for PyTorch* is always compiled with _GLIBCXX_USE_CXX11_ABI=1. This undefined-symbol error appears when PyTorch* is compiled with _GLIBCXX_USE_CXX11_ABI=0. Set export GLIBCXX_USE_CXX11_ABI=1 and compile PyTorch* with a compiler that supports _GLIBCXX_USE_CXX11_ABI=1. We recommend using the prebuilt wheels from the download server to avoid this issue.

+

DPC++ does not support _GLIBCXX_USE_CXX11_ABI=0; Intel® Extension for PyTorch* is always compiled with _GLIBCXX_USE_CXX11_ABI=1. This undefined-symbol error appears when PyTorch* is compiled with _GLIBCXX_USE_CXX11_ABI=0. Set export GLIBCXX_USE_CXX11_ABI=1 and compile PyTorch* with a compiler that supports _GLIBCXX_USE_CXX11_ABI=1. We recommend using the prebuilt wheels from the download server to avoid this issue.

  • Can’t find oneMKL library when building Intel® Extension for PyTorch* without oneMKL

    /usr/bin/ld: cannot find -lmkl_sycl
    diff --git a/xpu/1.13.120+xpu/_sources/tutorials/features/DDP.md.txt b/xpu/1.13.120+xpu/_sources/tutorials/features/DDP.md.txt
    index 56d7496a9..3c96b15a9 100644
    --- a/xpu/1.13.120+xpu/_sources/tutorials/features/DDP.md.txt
    +++ b/xpu/1.13.120+xpu/_sources/tutorials/features/DDP.md.txt
    @@ -62,11 +62,11 @@ Prebuilt wheel files for CPU, GPU with generic Python\* and GPU with Intel® Dis
     
     ```
     # Generic Python* for CPU
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-cpu
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
     # Generic Python* for GPU
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     # Intel® Distribution for Python*
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
     ```
     
     Installation from either repository shares the command below. Replace the place holder `<REPO_URL>` with a real URL mentioned above.
    diff --git a/xpu/1.13.120+xpu/_sources/tutorials/getting_started.md.txt b/xpu/1.13.120+xpu/_sources/tutorials/getting_started.md.txt
    index 4a4487327..cc52f5f2f 100644
    --- a/xpu/1.13.120+xpu/_sources/tutorials/getting_started.md.txt
    +++ b/xpu/1.13.120+xpu/_sources/tutorials/getting_started.md.txt
    @@ -5,7 +5,7 @@
     Prebuilt wheel files are released for multiple Python versions. You can install them simply with the following pip command.
     
     ```bash
    -python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
    +python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     ```
     
     You can run a simple sanity test to double confirm if the correct version is installed, and if the software stack can get correct hardware information onboard your system.
    diff --git a/xpu/1.13.120+xpu/_sources/tutorials/installation.md.txt b/xpu/1.13.120+xpu/_sources/tutorials/installation.md.txt
    index 6f4d09bc2..15da1ced7 100644
    --- a/xpu/1.13.120+xpu/_sources/tutorials/installation.md.txt
    +++ b/xpu/1.13.120+xpu/_sources/tutorials/installation.md.txt
    @@ -93,10 +93,10 @@ Prebuilt wheel files for generic Python\* and Intel® Distribution for Python\*
     
     ```bash
     # General Python*
    -python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
    +python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     
     # Intel® Distribution for Python*
    -python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu-idp
    +python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
     ```
     
     **Note:** Wheel files for Intel® Distribution for Python\* only supports Python 3.9. The support starts from 1.13.10+xpu.
    diff --git a/xpu/1.13.120+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt b/xpu/1.13.120+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
    index ab4079267..09ce6a1f5 100644
    --- a/xpu/1.13.120+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
    +++ b/xpu/1.13.120+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
    @@ -29,7 +29,7 @@ Known Issues
       ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
       ```
     
    -  DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
    +  DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us) to avoid this issue.
     
     ### Dependency Libraries
     
    diff --git a/xpu/1.13.120+xpu/tutorials/features/DDP.html b/xpu/1.13.120+xpu/tutorials/features/DDP.html
    index c82fb9186..7057737ae 100644
    --- a/xpu/1.13.120+xpu/tutorials/features/DDP.html
    +++ b/xpu/1.13.120+xpu/tutorials/features/DDP.html
    @@ -170,11 +170,11 @@ 

    Install from source:
    Install from prebuilt wheel:

    Prebuilt wheel files for CPU, GPU with generic Python* and GPU with Intel® Distribution for Python* are released in separate repositories.

    # Generic Python* for CPU
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-cpu
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
     # Generic Python* for GPU
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     # Intel® Distribution for Python*
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
     

    Installation from any of these repositories uses the command below. Replace the placeholder <REPO_URL> with one of the URLs above.

diff --git a/xpu/1.13.120+xpu/tutorials/getting_started.html b/xpu/1.13.120+xpu/tutorials/getting_started.html
index 24c25dba8..cf0ee799a 100644
--- a/xpu/1.13.120+xpu/tutorials/getting_started.html
+++ b/xpu/1.13.120+xpu/tutorials/getting_started.html
@@ -91,7 +91,7 @@

    Getting Started

    Installation

    Prebuilt wheel files are released for multiple Python versions. You can install them simply with the following pip command.

    -
    python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
    +
    python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     

    You can run a simple sanity test to confirm that the correct version is installed and that the software stack can detect the hardware on your system.

diff --git a/xpu/1.13.120+xpu/tutorials/installation.html b/xpu/1.13.120+xpu/tutorials/installation.html
index fb58ddadc..6ba101a34 100644
--- a/xpu/1.13.120+xpu/tutorials/installation.html
+++ b/xpu/1.13.120+xpu/tutorials/installation.html
@@ -275,10 +275,10 @@

    Install via wheel files

    Prebuilt wheel files for generic Python* and Intel® Distribution for Python* are released in separate repositories.

    # General Python*
    -python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
    +python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     
     # Intel® Distribution for Python*
    -python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://developer.intel.com/ipex-whl-stable-xpu-idp
    +python -m pip install torch==1.13.0a0+git6c9b55e torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.120+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
     

    Note: Wheel files for Intel® Distribution for Python* support Python 3.9 only. Support starts from release 1.13.10+xpu.

diff --git a/xpu/1.13.120+xpu/tutorials/performance_tuning/known_issues.html b/xpu/1.13.120+xpu/tutorials/performance_tuning/known_issues.html
index 29181b307..561d353d5 100644
--- a/xpu/1.13.120+xpu/tutorials/performance_tuning/known_issues.html
+++ b/xpu/1.13.120+xpu/tutorials/performance_tuning/known_issues.html
@@ -133,7 +133,7 @@

    Usage
    ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
     
    -

    DPC++ does not support _GLIBCXX_USE_CXX11_ABI=0; Intel® Extension for PyTorch* is always compiled with _GLIBCXX_USE_CXX11_ABI=1. This undefined-symbol error appears when PyTorch* is compiled with _GLIBCXX_USE_CXX11_ABI=0. Set export GLIBCXX_USE_CXX11_ABI=1 and compile PyTorch* with a compiler that supports _GLIBCXX_USE_CXX11_ABI=1. We recommend using the prebuilt wheels from the download server to avoid this issue.

    +

    DPC++ does not support _GLIBCXX_USE_CXX11_ABI=0; Intel® Extension for PyTorch* is always compiled with _GLIBCXX_USE_CXX11_ABI=1. This undefined-symbol error appears when PyTorch* is compiled with _GLIBCXX_USE_CXX11_ABI=0. Set export GLIBCXX_USE_CXX11_ABI=1 and compile PyTorch* with a compiler that supports _GLIBCXX_USE_CXX11_ABI=1. We recommend using the prebuilt wheels from the download server to avoid this issue.

diff --git a/xpu/2.0.110+xpu/_sources/tutorials/cheat_sheet.md.txt b/xpu/2.0.110+xpu/_sources/tutorials/cheat_sheet.md.txt
index f7423901b..f7fc8cabe 100644
--- a/xpu/2.0.110+xpu/_sources/tutorials/cheat_sheet.md.txt
+++ b/xpu/2.0.110+xpu/_sources/tutorials/cheat_sheet.md.txt
@@ -6,7 +6,7 @@ Get started with Intel® Extension for PyTorch\* using the following commands:
 |Description | Command |
 | -------- | ------- |
 | Basic CPU Installation | `python -m pip install intel_extension_for_pytorch` |
-| Basic GPU Installation | `pip install torch== -f https://developer.intel.com/ipex-whl-stable-xpu`<br>`pip install intel_extension_for_pytorch== -f https://developer.intel.com/ipex-whl-stable-xpu`|
+| Basic GPU Installation | `pip install torch== -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us`<br>`pip install intel_extension_for_pytorch== -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us`|
 | Import Intel® Extension for PyTorch\* | `import intel_extension_for_pytorch as ipex`|
 | Capture a Verbose Log (Command Prompt) | `export ONEDNN_VERBOSE=1` |
 | Optimization During Training | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br>`model, optimizer = ipex.optimize(model, optimizer=optimizer)`|

diff --git a/xpu/2.0.110+xpu/_sources/tutorials/features/DDP.md.txt b/xpu/2.0.110+xpu/_sources/tutorials/features/DDP.md.txt
index 26bbc02f7..0879c4b8e 100644
--- a/xpu/2.0.110+xpu/_sources/tutorials/features/DDP.md.txt
+++ b/xpu/2.0.110+xpu/_sources/tutorials/features/DDP.md.txt
@@ -62,11 +62,11 @@ Prebuilt wheel files for CPU, GPU with generic Python\* and GPU with Intel® Dis
 
 ```
 # Generic Python* for CPU
-REPO_URL: https://developer.intel.com/ipex-whl-stable-cpu
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
 # Generic Python* for GPU
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 # Intel® Distribution for Python*
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
+REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
 ```
 
 Installation from either repository shares the command below. Replace the place holder `<REPO_URL>` with a real URL mentioned above.

diff --git a/xpu/2.0.110+xpu/_sources/tutorials/installations/linux.rst.txt b/xpu/2.0.110+xpu/_sources/tutorials/installations/linux.rst.txt
index 4b94db494..bbd7f3951 100644
--- a/xpu/2.0.110+xpu/_sources/tutorials/installations/linux.rst.txt
+++ b/xpu/2.0.110+xpu/_sources/tutorials/installations/linux.rst.txt
@@ -139,7 +139,7 @@ Prebuilt wheel files availability matrix for Python versions:
 
 .. code:: shell
 
-   python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+   python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 
 .. note::
 
@@ -152,7 +152,7 @@ Prebuit wheel files only support Python 3.9 for Intel® Distribution for Python*
 
 .. code:: shell
 
-   python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu-idp
+   python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
 
 via conda command
 ~~~~~~~~~~~~~~~~~

diff --git a/xpu/2.0.110+xpu/_sources/tutorials/installations/windows.rst.txt b/xpu/2.0.110+xpu/_sources/tutorials/installations/windows.rst.txt
index e888dd0d0..18dd914fe 100644
--- a/xpu/2.0.110+xpu/_sources/tutorials/installations/windows.rst.txt
+++ b/xpu/2.0.110+xpu/_sources/tutorials/installations/windows.rst.txt
@@ -108,7 +108,7 @@ Prebuilt wheel files availability matrix for Python versions:
 
 .. code:: shell
 
    conda install pkg-config libuv
-   python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+   python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
 
 Important Notes
 ~~~~~~~~~~~~~~~

diff --git a/xpu/2.0.110+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt b/xpu/2.0.110+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
index 8ca9b89fd..253f977a9 100644
--- a/xpu/2.0.110+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
+++ b/xpu/2.0.110+xpu/_sources/tutorials/performance_tuning/known_issues.md.txt
@@ -23,7 +23,7 @@ Known Issues
   ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
   ```
 
-  DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
+  DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us) to avoid this issue.
 
 - Bad termination after AI model execution finishes when using Intel MPI

diff --git a/xpu/2.0.110+xpu/tutorials/cheat_sheet.html b/xpu/2.0.110+xpu/tutorials/cheat_sheet.html
index b1cd5fb68..9ade65c11 100644
--- a/xpu/2.0.110+xpu/tutorials/cheat_sheet.html
+++ b/xpu/2.0.110+xpu/tutorials/cheat_sheet.html
@@ -103,7 +103,7 @@

    Cheat Sheet
    Install from prebuilt wheel:

    Prebuilt wheel files for CPU, GPU with generic Python* and GPU with Intel® Distribution for Python* are released in separate repositories.

    # Generic Python* for CPU
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-cpu
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=cpu&repo=us
     # Generic Python* for GPU
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     # Intel® Distribution for Python*
    -REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
    +REPO_URL: https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
     

    Installation from any of these repositories uses the command below. Replace the placeholder <REPO_URL> with one of the URLs above.

diff --git a/xpu/2.0.110+xpu/tutorials/installations/linux.html b/xpu/2.0.110+xpu/tutorials/installations/linux.html
index 3809db436..a57877311 100644
--- a/xpu/2.0.110+xpu/tutorials/installations/linux.html
+++ b/xpu/2.0.110+xpu/tutorials/installations/linux.html
@@ -293,7 +293,7 @@

    Generic Python
    python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
    +
    python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     
    @@ -304,7 +304,7 @@

    Generic Python

    Intel® Distribution for Python*

    Prebuilt wheel files only support Python 3.9 in the Intel® Distribution for Python* environment. Support starts from release 1.13.10+xpu.

    -
    python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://developer.intel.com/ipex-whl-stable-xpu-idp
    +
    python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu-idp&repo=us
     
diff --git a/xpu/2.0.110+xpu/tutorials/installations/windows.html b/xpu/2.0.110+xpu/tutorials/installations/windows.html
index 3634d791d..908ee49da 100644
--- a/xpu/2.0.110+xpu/tutorials/installations/windows.html
+++ b/xpu/2.0.110+xpu/tutorials/installations/windows.html
@@ -242,7 +242,7 @@

    Generic Python
    conda install pkg-config libuv
    -python -m pip install torch==2.0.0a0 intel_extension_for_pytorch==2.0.110+gitba7f6c1 -f https://developer.intel.com/ipex-whl-stable-xpu
    +python -m pip install torch==2.0.0a0 intel_extension_for_pytorch==2.0.110+gitba7f6c1 -f https://pytorch-extension.intel.com/release-whl?release=stable&device=xpu&repo=us
     

diff --git a/xpu/2.0.110+xpu/tutorials/performance_tuning/known_issues.html b/xpu/2.0.110+xpu/tutorials/performance_tuning/known_issues.html
index b65e3d162..28795341c 100644
--- a/xpu/2.0.110+xpu/tutorials/performance_tuning/known_issues.html
+++ b/xpu/2.0.110+xpu/tutorials/performance_tuning/known_issues.html
@@ -132,7 +132,7 @@

    Usage
    ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
     
    -

    DPC++ does not support _GLIBCXX_USE_CXX11_ABI=0; Intel® Extension for PyTorch* is always compiled with _GLIBCXX_USE_CXX11_ABI=1. This undefined-symbol error appears when PyTorch* is compiled with _GLIBCXX_USE_CXX11_ABI=0. Set export GLIBCXX_USE_CXX11_ABI=1 and compile PyTorch* with a compiler that supports _GLIBCXX_USE_CXX11_ABI=1. We recommend using the prebuilt wheels from the download server to avoid this issue.

    +

    DPC++ does not support _GLIBCXX_USE_CXX11_ABI=0; Intel® Extension for PyTorch* is always compiled with _GLIBCXX_USE_CXX11_ABI=1. This undefined-symbol error appears when PyTorch* is compiled with _GLIBCXX_USE_CXX11_ABI=0. Set export GLIBCXX_USE_CXX11_ABI=1 and compile PyTorch* with a compiler that supports _GLIBCXX_USE_CXX11_ABI=1. We recommend using the prebuilt wheels from the download server to avoid this issue.

  • Bad termination after AI model execution finishes when using Intel MPI

    This issue occurs randomly when an AI model run (e.g. ResNet-50 training) finishes in an Intel MPI environment: the execution ends ungracefully. The workaround is to call dist.destroy_process_group() during the cleanup stage in the model script, as described in Getting Started with Distributed Data Parallel.
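    The workaround above can be sketched as follows. This is a hypothetical template rather than the model script itself: the backend name ("ccl") and the elided training body are assumptions; the point is that dist.destroy_process_group() runs in a finally block, so the process group is torn down during cleanup even if training raises:

    ```python
    def train(rank: int, world_size: int) -> None:
        # Imports kept inside the function so this sketch stays a template
        # and does not require torch to be installed at definition time.
        import torch.distributed as dist

        # "ccl" is the backend commonly used for Intel GPU DDP (assumption here).
        dist.init_process_group("ccl", rank=rank, world_size=world_size)
        try:
            ...  # model creation, ipex.optimize(...), training loop
        finally:
            # Workaround for the bad-termination issue described above:
            # explicitly destroy the process group during cleanup.
            dist.destroy_process_group()
    ```

    Placing the call in finally (rather than at the end of the happy path) ensures the MPI ranks exit gracefully even on error.
    
    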