
Test and improve intel support #13

Open

eppane opened this issue Nov 13, 2024 · 1 comment
Labels

backend (Updates related to backend), enhancement (New feature or request)

Comments

eppane (Collaborator) commented Nov 13, 2024

Currently, TorchServe has support for Intel GPUs via:

examples/intel_extension_for_pytorch

  • Support is provided as an extension rather than as an inherent part of TorchServe.

  • Nevertheless, TorchServe with Intel® Extension for PyTorch* can deploy any model and run inference.

  • To use TorchServe with an Intel GPU, the machine must have the latest oneAPI Base Toolkit installed and activated, and the Intel Extension for PyTorch (IPEX) GPU package installed (see the verification sketch after this list).

  • A custom intel_gpu_metric_collector.py is provided and must be used in place of TorchServe's existing metric_collector.py.
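As a quick way to confirm the oneAPI/IPEX prerequisite above, here is a minimal sketch (not part of the linked example) that checks whether the IPEX GPU build can see an Intel GPU and runs a trivial model on it; the model, dtype, and device index are illustrative assumptions:

```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch

# Sketch: verify the oneAPI + IPEX GPU setup before starting TorchServe.
if torch.xpu.is_available():
    print(f"Found {torch.xpu.device_count()} Intel GPU(s): {torch.xpu.get_device_name(0)}")

    # Illustrative handler-style step: move a model to the Intel GPU and let
    # IPEX apply its optimizations (the toy model and dtype are assumptions).
    model = torch.nn.Linear(16, 4).eval().to("xpu")
    model = ipex.optimize(model, dtype=torch.float32)

    with torch.no_grad():
        out = model(torch.randn(2, 16, device="xpu"))
    print("Inference on xpu OK:", tuple(out.shape))
else:
    print("No Intel GPU visible; check the oneAPI activation and the IPEX GPU install.")
```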

Action points:

  • Integrate Intel support as an inherent part of TorchServe
  • Integrate the provided custom metric collector script into TorchServe (see the sketch below)
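For reference, the following is a minimal sketch of what an Intel GPU metric collector could look like; it is not the provided intel_gpu_metric_collector.py. The torch.xpu memory calls and the metric-line layout are assumptions, modeled on IPEX's CUDA-like XPU API and on the general shape of TorchServe's system-metric log lines, so the real script should be consulted for the exact format.

```python
import time

import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (enables torch.xpu)


def collect_intel_gpu_metrics():
    """Sketch: print one metric line per visible Intel GPU.

    The "Name.Unit:value|#dimensions|timestamp" layout below only mimics
    TorchServe's system-metric logs; treat it as illustrative, not as the
    format the example's collector actually emits.
    """
    if not torch.xpu.is_available():
        return
    timestamp = int(time.time())
    for device_id in range(torch.xpu.device_count()):
        # torch.xpu mirrors the torch.cuda memory API when IPEX is installed
        # (assumption: allocated memory is the only counter we report here).
        used_mb = torch.xpu.memory_allocated(device_id) / (1024 * 1024)
        print(f"GPUMemoryUsed.Megabytes:{used_mb:.2f}"
              f"|#DeviceId:{device_id}|timestamp:{timestamp}")


if __name__ == "__main__":
    collect_intel_gpu_metrics()
```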