Commit

Remove import intel_extension_for_pytorch from fused_softmax.py (#2278)

Part of #2147

Performance is unchanged:
https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/10928161213
vs
https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/10912513726;
geomean diff: 1% for Triton, 2% for XeTLA.

One more CI run:
https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/10928754028
(also looks good)

Signed-off-by: Anatoly Myachev <anatoly.myachev@intel.com>
anmyachev authored Sep 19, 2024
1 parent 4d546ad commit e820f6c
Showing 1 changed file with 0 additions and 3 deletions.
benchmarks/triton_kernels_benchmark/fused_softmax.py
@@ -15,9 +15,6 @@
import triton_kernels_benchmark as benchmark_suit
import xetla_kernel

- if benchmark_suit.USE_IPEX_OPTION:
-     import intel_extension_for_pytorch  # type: ignore # noqa: F401
-

@torch.jit.script
def naive_softmax(x):
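The deleted lines follow a common "guarded side-effect import" pattern: a module is imported solely for its registration side effects, and only when a feature flag is set. A minimal sketch of that pattern, using a stand-in boolean flag (the real flag lives on `benchmark_suit`, which is not assumed to be available here) and `sys.modules` to observe the effect:

```python
import sys

# Stand-in for benchmark_suit.USE_IPEX_OPTION (assumption: a plain boolean flag).
USE_IPEX_OPTION = False

if USE_IPEX_OPTION:
    # Imported only for its side effects (e.g. XPU op registration),
    # hence the unused-import suppression in the original code.
    import intel_extension_for_pytorch  # type: ignore # noqa: F401

# With the flag off, the module is never loaded into the process.
print("intel_extension_for_pytorch" in sys.modules)
```

Removing the guard (as this commit does) means the benchmark no longer depends on the module being installed at all when the IPEX path is unused.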
