UNet++ Inference

UNet++ Inference best known configurations with Intel® Extension for PyTorch.

Model Information

| Use Case  | Framework | Model Repo | Branch/Commit/Tag | Optional Patch |
| --------- | --------- | ---------- | ----------------- | -------------- |
| Inference | PyTorch   | https://github.com/qubvel/segmentation_models.pytorch | - | - |

Pre-Requisite

Inference

  1. git clone https://github.com/IntelAI/models.git
  2. cd models/models_v2/pytorch/unetpp/inference/gpu
  3. Create a virtual environment venv and activate it:
    python3 -m venv venv
    . ./venv/bin/activate
    
  4. Run setup.sh
    ./setup.sh
    
  5. Install the latest GPU versions of torch, torchvision and intel_extension_for_pytorch:
    python -m pip install torch==<torch_version> torchvision==<torchvision_version> intel-extension-for-pytorch==<ipex_version> --extra-index-url https://pytorch-extension.intel.com/release-whl-aitools/
    
  6. Set environment variables for the Intel® oneAPI Base Toolkit. The default installation location {ONEAPI_ROOT} is /opt/intel/oneapi for the root account and ${HOME}/intel/oneapi for other accounts:
    source {ONEAPI_ROOT}/compiler/latest/env/vars.sh
    source {ONEAPI_ROOT}/mkl/latest/env/vars.sh
    source {ONEAPI_ROOT}/tbb/latest/env/vars.sh
    source {ONEAPI_ROOT}/mpi/latest/env/vars.sh
    source {ONEAPI_ROOT}/ccl/latest/env/vars.sh
  7. Set up the required environment parameters:

| Parameter | Export command |
| --------- | -------------- |
| MULTI_TILE | export MULTI_TILE=False (False) |
| PLATFORM | export PLATFORM=Flex (Flex) |
| OUTPUT_DIR | export OUTPUT_DIR=$PWD |
| BATCH_SIZE (optional) | export BATCH_SIZE=8 |
| PRECISION (optional) | export PRECISION=fp16 |
  8. Run run_model.sh (a consolidated example of steps 7-8 is shown below).
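
For reference, a minimal end-to-end sketch of steps 7-8 could look like the following. It assumes steps 1-6 have already been completed and uses the default parameter values from the table above:

```bash
# Minimal sketch: set the required parameters (defaults from the table above)
# and launch the benchmark script. Run from
# models/models_v2/pytorch/unetpp/inference/gpu after completing steps 1-6.
export MULTI_TILE=False
export PLATFORM=Flex
export OUTPUT_DIR=$PWD
export BATCH_SIZE=8        # optional
export PRECISION=fp16      # optional

./run_model.sh
```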

Output

Single-tile output will typically look like:

Latency: 0.03823380470275879
Throughput: 209.23891990855813

Final results of the inference run can be found in the results.yaml file.

results:
 - key: throughput
   value: 209.23892
   unit: fps
 - key: latency
   value: 0.03823380470275879
   unit: s
 - key: accuracy
   value: None
   unit: Acc
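
To pull a single metric out of results.yaml in a script, a minimal sketch could be the following. It assumes results.yaml is written to $OUTPUT_DIR (an assumption; the exact output location is not stated above):

```bash
# Minimal sketch: extract the throughput value from results.yaml.
# Assumes the file is located in $OUTPUT_DIR.
grep -A1 'key: throughput' "${OUTPUT_DIR}/results.yaml" | awk '/value:/ {print $2}'
```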