Commit fb5fce5: Adopt rc2 wheels (#4756)

sramakintel authored Sep 6, 2024
1 parent 8d4670e commit fb5fce5

Showing 3 changed files with 32 additions and 35 deletions.
17 changes: 1 addition & 16 deletions docker/Dockerfile.prebuilt
@@ -58,22 +58,7 @@ RUN apt-get update && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

-RUN wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB \
-    | gpg --dearmor | tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null && \
-    echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" \
-    | tee /etc/apt/sources.list.d/oneAPI.list
-
-ARG DPCPP_VER
-ARG MKL_VER
-ARG CCL_VER
-
-RUN apt-get update && \
-    apt-get install -y --no-install-recommends \
-    intel-oneapi-runtime-dpcpp-cpp=${DPCPP_VER} \
-    intel-oneapi-runtime-mkl=${MKL_VER} \
-    intel-oneapi-runtime-ccl=${CCL_VER} && \
-    apt-get clean && \
-    rm -rf /var/lib/apt/lists/*
+ENV OCL_ICD_VENDORS=/etc/OpenCL/vendors

ARG PYTHON
RUN apt-get update && apt install -y software-properties-common
47 changes: 31 additions & 16 deletions docker/README.md
@@ -33,18 +33,20 @@ Alternatively, the `./build.sh` script has a docker build command to install prebuilt
To pull docker images use the following command:

```bash
-docker pull intel/intel-extension-for-pytorch:xpu
+docker pull intel/intel-extension-for-pytorch:2.3.110-xpu
```

### Running container:

Run the following commands to start the Intel® Extension for PyTorch\* GPU container. You can use the `-v` option to mount your
local directory into the container; the `-v` argument can be omitted if you do not need
access to a local directory inside the container. Pass the video and render groups to your
docker container so that the GPU is accessible.

-```
-IMAGE_NAME=intel/intel-extension-for-pytorch:xpu
-```
+```bash
+IMAGE_NAME=intel/intel-extension-for-pytorch:2.3.110-xpu
+```

```bash
docker run --rm \
-v <your-local-dir>:/workspace \
@@ -58,44 +60,57 @@

#### Verify if XPU is accessible from PyTorch:
You are inside the container now. Run the following command to verify that the XPU is visible to PyTorch:

```bash
python -c "import torch;print(torch.device('xpu'))"
```
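Note that `torch.device('xpu')` only constructs a device object; it can succeed even when no XPU is usable. A more defensive visibility check might look like the sketch below (a hypothetical helper, not part of the image; it falls back gracefully when `torch` or the extension is not installed, e.g. outside the container):

```python
import importlib.util

def xpu_device_count() -> int:
    """Return the number of visible XPU devices, or 0 if the stack is absent."""
    if importlib.util.find_spec("torch") is None:
        return 0  # torch is not installed in this environment
    import torch
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        import intel_extension_for_pytorch  # noqa: F401  registers the 'xpu' backend
    try:
        # Older torch builds have no torch.xpu attribute at all
        return torch.xpu.device_count() if hasattr(torch, "xpu") else 0
    except Exception:
        return 0  # backend present but unusable (e.g. missing driver)

print(xpu_device_count())
```

Inside the container above this should print a positive device count; elsewhere it prints 0 instead of raising.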

Sample output looks like the following:

```bash
xpu
```

Then, verify that the XPU device is available to Intel® Extension for PyTorch\*:

```bash
-python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.is_available())"
+python -c "import torch;import intel_extension_for_pytorch as ipex;print(torch.xpu.has_xpu())"
```

Sample output looks like the following:

```bash
True
```

Use the following command to check whether oneMKL is enabled by default:

```bash
-python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.has_onemkl())"
+python -c "import torch;import intel_extension_for_pytorch as ipex;print(torch.xpu.has_onemkl())"
```

Sample output looks like the following:

```bash
True
```

Finally, use the following command to show detailed information about the detected devices:

```bash
-python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {ipex.xpu.get_device_properties(i)}') for i in range(ipex.xpu.device_count())];"
+python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
```

Sample output looks like the following:

+```bash
+2.3.1+cxx11.abi
+2.3.110+xpu
+[0]: _XpuDeviceProperties(name='Intel(R) Data Center GPU Max 1550', platform_name='Intel(R) Level-Zero', type='gpu', driver_version='1.3.30049', total_memory=65536MB, max_compute_units=448, gpu_eu_count=448, gpu_subslice_count=56, max_work_group_size=1024, max_num_sub_groups=64, sub_group_sizes=[16 32], has_fp16=1, has_fp64=1, has_atomic64=1)
+[1]: _XpuDeviceProperties(name='Intel(R) Data Center GPU Max 1550', platform_name='Intel(R) Level-Zero', type='gpu', driver_version='1.3.30049', total_memory=65536MB, max_compute_units=448, gpu_eu_count=448, gpu_subslice_count=56, max_work_group_size=1024, max_num_sub_groups=64, sub_group_sizes=[16 32], has_fp16=1, has_fp64=1, has_atomic64=1)
+```
-```
-2.1.0.post2+cxx11.abi
-2.1.30+xpu
-[0]: _DeviceProperties(name='Intel(R) Data Center GPU Max 1550', platform_name='Intel(R) Level-Zero', dev_type='gpu', driver_version='1.3.27642', has_fp64=1, total_memory=65536MB, max_compute_units=448, gpu_eu_count=448)
-[1]: _DeviceProperties(name='Intel(R) Data Center GPU Max 1550', platform_name='Intel(R) Level-Zero', dev_type='gpu', driver_version='1.3.27642', has_fp64=1, total_memory=65536MB, max_compute_units=448, gpu_eu_count=448)
-```
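If a script needs individual fields from output like the above, in practice `torch.xpu.get_device_properties(i)` returns an object whose attributes can be read directly; purely as an illustration, the printed repr can also be parsed with the standard library (the sample line below is abbreviated from the output shown above):

```python
import re

# One line of the sample output above (abbreviated).
line = ("[0]: _XpuDeviceProperties(name='Intel(R) Data Center GPU Max 1550', "
        "platform_name='Intel(R) Level-Zero', type='gpu', "
        "driver_version='1.3.30049', total_memory=65536MB, "
        "max_compute_units=448, gpu_eu_count=448)")

# key=value pairs: values are either quoted strings or bare tokens
# (no commas or parentheses).
fields = dict(re.findall(r"(\w+)=('[^']*'|[^,()]+)", line))
print(fields["total_memory"], fields["max_compute_units"])  # prints: 65536MB 448
```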

#### Running your own script

Now you are inside the container with Python 3.10, PyTorch, and Intel® Extension for PyTorch\* preinstalled. You can run your own script
on the Intel GPU.
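As a starting point, a user script can select the best available device and fall back to CPU (a sketch only; the tensor sizes are illustrative, and the import guards let the same file run outside the container, where `torch` or the extension may be missing):

```python
def run_matmul() -> str:
    """Run a small matmul on XPU if available, else CPU; return the device used."""
    try:
        import torch
    except ImportError:
        return "unavailable"  # torch is not installed outside the container
    try:
        import intel_extension_for_pytorch  # noqa: F401  registers the 'xpu' backend
    except ImportError:
        pass  # CPU-only environment; the extension is optional here
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
    x = torch.randn(512, 512, device=device)
    (x @ x).sum()  # a small workload just to exercise the chosen device
    return device

print(run_matmul())
```

Inside the container this prints `xpu`; on a machine without the stack it degrades rather than crashing.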


3 changes: 0 additions & 3 deletions docker/build.sh
@@ -12,9 +12,6 @@ if [[ ${IMAGE_TYPE} = "xpu" ]]; then
--build-arg LEVEL_ZERO_GPU_VER=1.3.30049.10-950~22.04 \
--build-arg LEVEL_ZERO_VER=1.17.6-950~22.04 \
--build-arg LEVEL_ZERO_DEV_VER=1.17.6-950~22.04 \
---build-arg DPCPP_VER=2024.1.0-963 \
---build-arg MKL_VER=2024.1.0-691 \
---build-arg CCL_VER=2021.12.0-309 \
--build-arg TORCH_VERSION=2.3.1+cxx11.abi \
--build-arg IPEX_VERSION=2.3.110+xpu \
--build-arg TORCHVISION_VERSION=0.18.1+cxx11 \
