
Document state of PyTorch upstreaming in the README #231

Open
bmillwood opened this issue May 16, 2024 · 2 comments


bmillwood commented May 16, 2024

The README mentions that some of these optimizations already exist upstream in numpy, but that you need to pass optimize=True to access them. This is useful!
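For concreteness, here is a small illustration (not taken from the README) of the numpy behaviour being referred to; the shapes are arbitrary:

```python
# Hypothetical shapes, just to illustrate the numpy flag referred to above.
import numpy as np

a = np.random.rand(8, 16)
b = np.random.rand(16, 32)
c = np.random.rand(32, 8)

# Default: operands are contracted left to right, with no path optimization.
out_naive = np.einsum("ij,jk,kl->il", a, b, c)

# Opting in: numpy searches for a cheaper contraction order.
out_opt = np.einsum("ij,jk,kl->il", a, b, c, optimize=True)
```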

The docs for torch.einsum suggest that it already uses opt_einsum automatically if it is available (see also the discussion at #205). It would be helpful to mention that here as well, and to say whether opt_einsum needs to be imported explicitly to get this behaviour (I believe it doesn't), perhaps also mentioning torch.backends.opt_einsum.is_available() and torch.backends.opt_einsum.enabled, or anything else that seems relevant or useful. (I think the torch docs could also be improved here, and I may submit an issue or PR there, but it would be useful to say something in this README regardless.)
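As an illustrative sketch (not something the README or the torch docs currently show), the checks mentioned above could look like the following; the exact behaviour may vary with the installed torch version:

```python
# Hypothetical example; the shapes are made up, and the exact behaviour
# depends on the installed torch version.
import torch

# True if the opt_einsum package was importable when torch was loaded.
print(torch.backends.opt_einsum.is_available())

# Whether torch.einsum will use opt_einsum to choose the contraction path.
print(torch.backends.opt_einsum.enabled)

a = torch.randn(8, 16)
b = torch.randn(16, 32)
c = torch.randn(32, 8)

# No explicit `import opt_einsum` in user code: if it is available and enabled,
# torch.einsum optimizes the contraction order on its own.
out = torch.einsum("ij,jk,kl->il", a, b, c)
```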

Doing something like this for every supported opt_einsum backend might be quite a task, but let's not let perfect be the enemy of good :)

@janeyx99 (Contributor) commented
hehe I wrote the torch.einsum docs regarding the use of opt_einsum, so let me know where you think the docs could be improved! (feel free to open an issue on torch and cc me)

@bmillwood (Author) commented
