
[BUG] Differentiation does not work with dynamic_one_shot #5736

Closed
mudit2812 opened this issue May 23, 2024 · 1 comment · Fixed by #5973
Labels: bug 🐛 Something isn't working

mudit2812 (Contributor) commented:

Expected behavior

I expect to be able to differentiate arbitrary circuits when using the dynamic_one_shot transform.

Actual behavior

I get a RuntimeError: the QNode output does not carry a grad_fn, so result.backward() fails (full traceback below).

Additional information

Originally reported in this forum discussion.

Source code

import pennylane as qml
import torch

dev = qml.device("default.qubit", shots=10)

@qml.qnode(dev, interface="torch")
def f(x):
    qml.RX(x, 0)
    return qml.expval(qml.measure(0))  # mid-circuit measurement routes through dynamic_one_shot

x = torch.tensor(0.4, requires_grad=True)
result = f(x)
result.backward()  # raises RuntimeError (see traceback below)
x.grad
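
For comparison, a minimal sketch continuing from the snippet above (not part of the original report; it swaps the mid-circuit measurement for a terminal PauliZ expectation): this variant differentiates without error, which isolates the failure to the dynamic_one_shot path.

@qml.qnode(dev, interface="torch")
def g(x):
    qml.RX(x, 0)
    return qml.expval(qml.PauliZ(0))  # no mid-circuit measurement

y = torch.tensor(0.4, requires_grad=True)
g(y).backward()  # succeeds
y.grad  # ~ -sin(0.4) ≈ -0.389, up to shot noise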

Tracebacks

/usr/local/lib/python3.10/dist-packages/autoray/autoray.py:81: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  return func(*args, **kwargs)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-879285d05025> in <cell line: 13>()
     11 x = torch.tensor(0.4, requires_grad=True) # switch to torch tensor
     12 result = f(x)
---> 13 result.backward() # replace with torch gradient computation
     14 x.grad

1 frames
/usr/local/lib/python3.10/dist-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    520                 inputs=inputs,
    521             )
--> 522         torch.autograd.backward(
    523             self, gradient, retain_graph, create_graph, inputs=inputs
    524         )

/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    264     # some Python versions print out the first line of a multi-line function
    265     # calls in the traceback and some print out the last line
--> 266     Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    267         tensors,
    268         grad_tensors_,

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

System information

Dev. Using PennyLane branch dos-interfaces

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.
mudit2812 (Contributor, Author) commented:

Resolved by #5791
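
As a quick check of the now-working workflow, a sketch (not from the issue thread; it assumes parameter-shift differentiation with finite shots): since ⟨m⟩ = sin²(x/2) for RX(x), the gradient should land near sin(x)/2.

import pennylane as qml
import torch

dev = qml.device("default.qubit", shots=10000)

@qml.qnode(dev, interface="torch", diff_method="parameter-shift")
def f(x):
    qml.RX(x, 0)
    return qml.expval(qml.measure(0))

x = torch.tensor(0.4, requires_grad=True)
f(x).backward()  # no longer raises after #5791
x.grad  # ~ sin(0.4) / 2 ≈ 0.195, up to shot noise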

mudit2812 added a commit that referenced this issue Jul 26, 2024
**Context:**
As the title says. Gradient workflows no longer raise errors after the merge of #5791, but their correctness is yet to be verified.

**Description of the Change:**
* Updated casting rules in `dynamic_one_shot`'s processing function for TensorFlow.
* For the changes to be fully integrated, the way the interface is passed around when calling a QNode needed to change, so the following changes were made (see the sketch after this list):
  * `QNode` has updated behaviour for how `mcm_config` is used during execution. In `QNode._execution_component`, a copy of `self.execute_kwargs["mcm_config"]` is the source of truth, and in `qml.execute`, `config.mcm_config` is the source of truth.
  * Added a private `pad-invalid-samples` `postselect_mode`. The `postselect_mode` is switched to this automatically in `qml.execute` when executing with jax and finite shots and `postselect_mode == "hw-like"`. This way we standardize how the MCM transforms determine whether jax is being used.
  * Updates to the `capture` module to accommodate the above changes.
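
A minimal usage sketch of the user-facing knobs involved (assuming the public `mcm_method` and `postselect_mode` QNode keyword arguments; the `pad-invalid-samples` mode itself stays private and is selected automatically):

import pennylane as qml

dev = qml.device("default.qubit", shots=100)

# Users request "hw-like"; under jax with finite shots, qml.execute now
# swaps it internally for the private "pad-invalid-samples" mode.
@qml.qnode(dev, mcm_method="one-shot", postselect_mode="hw-like")
def circuit(x):
    qml.RX(x, 0)
    qml.measure(0, postselect=1)  # mid-circuit measurement with postselection
    return qml.expval(qml.PauliZ(0))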

**Benefits:**
* `dynamic_one_shot` doesn't cast to interfaces inside the ML boundary
* `dynamic_one_shot` works with tensorflow
* Expanded tests

**Possible Drawbacks:**

**Related GitHub Issues:**
Fixes #5736, #5710 

Duplicate of #5861, which was closed due to release-branch merge logistics.

---------

Co-authored-by: Jay Soni <jbsoni@uwaterloo.ca>
Co-authored-by: Astral Cai <astral.cai@xanadu.ai>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Yushao Chen (Jerry) <chenys13@outlook.com>
Co-authored-by: Christina Lee <chrissie.c.l@gmail.com>
Co-authored-by: Thomas R. Bromley <49409390+trbromley@users.noreply.github.com>
Co-authored-by: soranjh <40344468+soranjh@users.noreply.github.com>
Co-authored-by: Pietropaolo Frisoni <pietropaolo.frisoni@xanadu.ai>
Co-authored-by: Ahmed Darwish <exclass9.24@gmail.com>
Co-authored-by: Utkarsh <utkarshazad98@gmail.com>
Co-authored-by: David Wierichs <david.wierichs@xanadu.ai>
Co-authored-by: Christina Lee <christina@xanadu.ai>
Co-authored-by: Mikhail Andrenkov <mikhail@xanadu.ai>
Co-authored-by: Diego <67476785+DSGuala@users.noreply.github.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Diego <diego_guala@hotmail.com>
Co-authored-by: Vincent Michaud-Rioux <vincentm@nanoacademic.com>
Co-authored-by: lillian542 <38584660+lillian542@users.noreply.github.com>
Co-authored-by: Jack Brown <jack@xanadu.ai>
Co-authored-by: Paul Finlay <50180049+doctorperceptron@users.noreply.github.com>
Co-authored-by: David Ittah <dime10@users.noreply.github.com>
Co-authored-by: Cristian Emiliano Godinez Ramirez <57567043+EmilianoG-byte@users.noreply.github.com>
Co-authored-by: Vincent Michaud-Rioux <vincent.michaud-rioux@xanadu.ai>