
Small fixes: 'dp' and memory_limit + tensordot axes order #154

Merged: 6 commits into dgasmith:master on Nov 4, 2020

Conversation

@jcmgray (Collaborator) commented Nov 2, 2020

Description

This PR:

  1. Fixes the bug where a too-low memory_limit meant the 'dp' optimizer would search forever (infinite loop in dynamic programming with the 'max_input' memory limit, #153); see the usage sketch just after this list.
  2. Puts the tensordot axes in a canonical order (the order in which they appear on the first operand), so that performance shouldn't change unexpectedly between essentially identical einsums with dummy indices interchanged (#143).
  3. Changes an isinstance call to infer_backend, which is cleaner and fixes a rare bug when mixing input types for e.g. a jax-compiled contraction.
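
A minimal usage sketch of fix 1 (the equation and shapes below are made up, not taken from #153): combining optimize='dp' with a restrictive memory_limit such as 'max_input' is the pattern that could previously loop forever; after this PR the search terminates and returns a path.

import numpy as np
import opt_einsum as oe

# Hypothetical contraction; the equation and sizes are assumptions for illustration.
eq = "ab,bc,cd->ad"
arrays = [np.random.rand(10, 10) for _ in range(3)]

# 'dp' plus a tight memory_limit ('max_input' = size of the largest input)
# is the combination from #153 that used to hang the optimizer.
path, info = oe.contract_path(eq, *arrays, optimize='dp', memory_limit='max_input')
print(path)                       # e.g. [(0, 1), (0, 1)]
print(info.largest_intermediate)  # size of the biggest tensor along the chosen path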

Status

  • Ready to go

@jcmgray jcmgray changed the title Small fixes: 'dp' and memory_limit axes order Small fixes: 'dp' and memory_limit + tensordot axes order Nov 2, 2020
@dgasmith (Owner) left a comment

LGTM once tests are patched.

@@ -569,8 +567,11 @@ def _core_contract(operands, contraction_list, backend='auto', evaluate_constant
left_pos.append(input_left.find(s))
right_pos.append(input_right.find(s))

+ # Construct the axes tuples in a canonical order
+ axes = tuple(zip(*sorted(zip(left_pos, right_pos))))
@dgasmith (Owner) commented:

If one of the lists is empty the return is () rather than ((), ()), causing the Travis failures.
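
For illustration, a standalone sketch of the canonical ordering above and of the corner case flagged in this comment; the guard on the last line is only one possible fix, not necessarily the change that was merged.

# Hypothetical positions of the contracted indices on each operand.
left_pos, right_pos = [2, 1], [0, 1]

# Sorting the (left, right) pairs by the left position gives a canonical order.
axes = tuple(zip(*sorted(zip(left_pos, right_pos))))
print(axes)   # ((1, 2), (1, 0))

# Corner case: with no contracted indices both lists are empty and the
# expression collapses to () instead of the ((), ()) expected downstream.
left_pos, right_pos = [], []
print(tuple(zip(*sorted(zip(left_pos, right_pos)))))               # ()

# One possible guard (an assumption, not necessarily what was merged):
print(tuple(zip(*sorted(zip(left_pos, right_pos)))) or ((), ()))   # ((), ())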

@@ -757,7 +758,7 @@ def __call__(self, *arrays, **kwargs):
try:
# Check if the backend requires special preparation / calling
# but also ignore non-numpy arrays -> assume user wants same type back
- if backends.has_backend(backend) and all(isinstance(x, np.ndarray) for x in arrays):
+ if backends.has_backend(backend) and all(infer_backend(x) == 'numpy' for x in arrays):
@dgasmith (Owner) commented:
Cleaner, nice.
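
A rough sketch of why the infer_backend check is stricter than isinstance; the helper below is a simplified stand-in assumed to mirror opt_einsum's infer_backend, and the ndarray subclass is purely hypothetical.

import numpy as np

def infer_backend(x):
    # Simplified stand-in: report the top-level module the array type lives in.
    return x.__class__.__module__.split('.')[0]

class NotReallyNumpy(np.ndarray):
    # Hypothetical ndarray subclass standing in for an array from another library.
    pass

x = np.arange(3).view(NotReallyNumpy)
print(isinstance(x, np.ndarray))    # True  -> old check would take the numpy-only path
print(infer_backend(x) == 'numpy')  # False -> new check skips that branch, same type back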

@codecov bot commented Nov 4, 2020

Codecov Report

Merging #154 into master will increase coverage by 0.00%.
The diff coverage is 100.00%.

@dgasmith (Owner) left a comment

LGTM!

@dgasmith dgasmith merged commit 32fa384 into dgasmith:master Nov 4, 2020