
Update PyTorch pin #2719

Open
wants to merge 1 commit into main

Conversation

@anmyachev (Contributor) commented Nov 15, 2024

Signed-off-by: Anatoly Myachev <anatoly.myachev@intel.com>
@anmyachev linked an issue Nov 15, 2024 that may be closed by this pull request
@alexbaden (Contributor) left a comment


Looks good - can we run inductor tests and also verify that the timing is reasonable for the benchmark results?

@anmyachev (Contributor, Author)

> Looks good - can we run inductor tests and also verify that the timing is reasonable for the benchmark results?

Sure, here is the inductor CI run: https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/11856999355

> verify that the timing is reasonable for the benchmark results?

Do you mean for the tutorials? The separate benchmark workflow doesn't work with the new compiler yet (to be precise, elapsed_time doesn't work).
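
For context, a minimal sketch of the event-based timing that elapsed_time refers to, assuming the usual torch.xpu event API; the device, workload, and sizes here are illustrative and not part of this PR:

```python
import torch

# Illustrative workload on the XPU device.
x = torch.randn(1024, 1024, device="xpu")

# Event-based timing of the kind the benchmark workflow relies on;
# elapsed_time() is the call reported as not working with the new compiler.
start = torch.xpu.Event(enable_timing=True)
end = torch.xpu.Event(enable_timing=True)

start.record()
y = x @ x
end.record()
torch.xpu.synchronize()

print(f"matmul took {start.elapsed_time(end):.3f} ms")
```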

@alexbaden (Contributor)

> Do you mean for the tutorials?

Yes, whatever reports out to the dashboards with the previous elapsed_time hack (probably just the tutorials).

@anmyachev (Contributor, Author)

> > Do you mean for the tutorials?
>
> Yes, whatever reports out to the dashboards with the previous elapsed_time hack (probably just the tutorials).

Overall the results look reasonable, but in some places the numbers change dramatically, for example in one of the tutorials (usually this shows up at small dimensions). I also expect that, since the new profiling implementation should have less impact on runtime, a performance increase should be clearly noticeable in some cases.

[Screenshot: benchmark comparison for the affected tutorial]
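
For reference, this is roughly how a tutorial benchmark times a kernel with triton.testing.do_bench; a hedged sketch, with the vector-add workload and sizes chosen only for illustration. Any fixed per-measurement profiling overhead is proportionally larger when the measured kernel time is tiny, which is consistent with the differences appearing mostly at small dimensions:

```python
import torch
import triton.testing

def bench_add(n: int):
    x = torch.randn(n, device="xpu")
    y = torch.randn(n, device="xpu")
    # do_bench runs the callable repeatedly and returns a time in milliseconds;
    # any fixed profiling overhead weighs more when this time is tiny (small n).
    ms = triton.testing.do_bench(lambda: x + y)
    # Effective bandwidth: read x, read y, write the result.
    gbps = 3 * x.numel() * x.element_size() * 1e-9 / (ms * 1e-3)
    return ms, gbps

for n in (2**12, 2**20, 2**26):
    print(n, bench_add(n))
```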

@anmyachev marked this pull request as ready for review on November 15, 2024, 14:21
Successfully merging this pull request may close these issues: PyTorch tests failed