Document state of PyTorch upstreaming in the README #231
The README mentions that some of these optimizations already exist upstream in numpy, but that you need to pass `optimize=True` to access them. This is useful!
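For concreteness, this is the numpy behaviour I mean; a minimal sketch (the array names and shapes are just for illustration):

```python
import numpy as np

a = np.random.rand(10, 20)
b = np.random.rand(20, 30)
c = np.random.rand(30, 5)

# By default np.einsum contracts the operands naively; optimize=True
# asks it to search for a cheaper contraction order first.
out = np.einsum("ij,jk,kl->il", a, b, c, optimize=True)

# np.einsum_path shows the order it would pick.
path, info = np.einsum_path("ij,jk,kl->il", a, b, c, optimize="greedy")
print(info)
```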
The docs for `torch.einsum` suggest that it automatically uses `opt_einsum` already if it is available (see also the discussion at #205). It would be helpful to also mention that here, and to say whether it is necessary to explicitly import `opt_einsum` to get this behaviour (I believe not?), potentially also mentioning `torch.backends.opt_einsum.is_available()` and `torch.backends.opt_einsum.enabled`, or anything else that seems relevant / useful; there is a sketch of what I mean below. (I think the torch docs could also be improved here, and may submit an issue or PR there, but I think it would be useful to say something here regardless.)

Doing something like this for every supported opt_einsum backend might be quite a task, but let's not let perfect be the enemy of good :)
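Here is roughly what a note on the torch side could show; a sketch only, assuming a recent PyTorch (the `torch.backends.opt_einsum` module appeared around 1.13, and I am going from memory on the `strategy` attribute):

```python
import torch

# torch.einsum picks up opt_einsum automatically when the package is
# importable; no explicit `import opt_einsum` is needed.
print(torch.backends.opt_einsum.is_available())  # True iff opt_einsum can be imported
print(torch.backends.opt_einsum.enabled)         # the integration is on by default

# Assumption: recent PyTorch also exposes a path-search strategy knob
# ("auto" by default, with "greedy" and "optimal" as alternatives).
if torch.backends.opt_einsum.is_available():
    torch.backends.opt_einsum.strategy = "greedy"

a = torch.randn(10, 20)
b = torch.randn(20, 30)
c = torch.randn(30, 5)
out = torch.einsum("ij,jk,kl->il", a, b, c)
```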
Comments

hehe I wrote the torch einsum docs regarding using opt_einsum, so let me know where you think the docs could be improved! (feel free to open an issue on torch and cc me)