
[BUG] Some templates not properly differentiated with parameter shift on the legacy device #5802

Closed
1 task done
astralcai opened this issue Jun 5, 2024 · 1 comment · Fixed by #5806
Labels
bug 🐛 Something isn't working

Comments


astralcai commented Jun 5, 2024

Expected behavior

ControlledSequence, Reflection, Qubitization, and AmplitudeAmplification can be differentiated with both default.qubit and default.qubit.legacy.

Actual behavior

Consider the following circuit:

import pennylane as qml

def circuit(x):
    qml.PauliX(2)
    qml.ControlledSequence(qml.RX(x, wires=3), control=[0, 1, 2])
    return qml.probs(wires=range(4))

With default.qubit, it produces the expected results when taking the gradient:

>>> dev = qml.device("default.qubit")
>>> qnode = qml.QNode(circuit, dev, interface="autograd", diff_method="parameter-shift")
>>> x = qml.numpy.array(0.5, requires_grad=True)
>>> qml.jacobian(qnode)(x)
array([ 0.        ,  0.        , -0.23971277,  0.23971277,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ])

But with default.qubit.legacy,

>>> dev = qml.device("default.qubit.legacy", wires=4)
>>> qnode = qml.QNode(circuit, dev, interface="autograd", diff_method="parameter-shift")
>>> qml.jacobian(qnode)(x)
TypeError: 'NoneType' object is not subscriptable

This error can be fixed quite easily by adding

# check the grad_recipe validity
if self.grad_recipe is None:
    # Make sure grad_recipe is an iterable of correct length instead of None
    self.grad_recipe = [None] * self.num_params

to the end of ControlledSequence.__init__(), but when shots are added:

>>> dev = qml.device("default.qubit.legacy", wires=4, shots=50000)
>>> qnode = qml.QNode(circuit, dev, interface="autograd", diff_method="parameter-shift")
>>> qml.jacobian(qnode)(x)
array([     0.,      0., -26200.,  26200.,      0.,      0.,      0.,
            0.,      0.,      0.,      0.,      0.,      0.,      0.,
            0.,      0.])

We get nonsensical results. The same issue appears for Reflection, Qubitization, and AmplitudeAmplification.

Additional information

The issue is that default.qubit runs a preprocessing step before the ML boundary that decomposes these templates into lower-level gates. With default.qubit.legacy, however, expansion happens in inner_execute, so the ML boundary deals with the template operations themselves, and those are not properly differentiable.
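Once the templates are decomposed into elementary rotations, each trainable parameter can be differentiated with the exact two-term parameter-shift rule. A minimal sketch of that rule in plain Python (no PennyLane), using the toy cost f(x) = cos(x) that an RX rotation on |0⟩ produces for ⟨Z⟩:

```python
import math

def expval(x):
    # Toy cost: <Z> after RX(x) on |0> is cos(x)
    return math.cos(x)

def parameter_shift(f, x, s=math.pi / 2):
    # Two-term parameter-shift rule; exact for generators
    # with eigenvalues +-1/2 (e.g. single-qubit rotations)
    return (f(x + s) - f(x - s)) / (2 * math.sin(s))

x = 0.5
exact = -math.sin(x)  # analytic derivative of cos(x)
assert abs(parameter_shift(expval, x) - exact) < 1e-12
```

The rule is exact (not a numerical approximation), which is why decomposing a template down to gates it applies to is the right fix rather than falling back to finite differences.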

This bug is blocking #5791

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.37.0.dev0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Location: /Users/astral.cai/Workspace/pennylane/venv/lib/python3.9/site-packages
Editable project location: /Users/astral.cai/Workspace/pennylane
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           macOS-14.5-arm64-arm-64bit
Python version:          3.9.19
Numpy version:           1.26.4
Scipy version:           1.11.4
Installed devices:
- default.clifford (PennyLane-0.37.0.dev0)
- default.gaussian (PennyLane-0.37.0.dev0)
- default.mixed (PennyLane-0.37.0.dev0)
- default.qubit (PennyLane-0.37.0.dev0)
- default.qubit.autograd (PennyLane-0.37.0.dev0)
- default.qubit.jax (PennyLane-0.37.0.dev0)
- default.qubit.legacy (PennyLane-0.37.0.dev0)
- default.qubit.tf (PennyLane-0.37.0.dev0)
- default.qubit.torch (PennyLane-0.37.0.dev0)
- default.qutrit (PennyLane-0.37.0.dev0)
- default.qutrit.mixed (PennyLane-0.37.0.dev0)
- null.qubit (PennyLane-0.37.0.dev0)
- lightning.qubit (PennyLane-Lightning-0.36.0)

Existing GitHub issues

  • I have searched existing GitHub issues to make sure the issue does not already exist.

dwierichs commented Jun 6, 2024

> but when shots are added [...] We get non-sensible results.

This is a result of finite differences. Some operation in the decomposition will have grad_method="F". ControlledSequence with the fixed grad_recipe is not caught by _param_shift_stopping_condition and thus not decomposed. It then falls back to finite differences instead of parameter shift. The result is a numerically unstable derivative that converges at float precision (shots=None) but not at finite-shot precision (for any shot count that fits on a laptop).
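The instability described above can be reproduced without PennyLane: a central finite difference divides by a small step h, so shot noise of scale σ in the function evaluations shows up in the derivative estimate at scale roughly σ/h. A sketch with simulated shot noise (the noise model and step size here are illustrative assumptions):

```python
import math
import random

def noisy_cos(x, sigma):
    # cos(x) plus simulated shot noise of standard deviation sigma
    return math.cos(x) + random.gauss(0.0, sigma)

def finite_diff(f, x, h):
    # Central finite difference; noise in f is amplified by ~1/h
    return (f(x + h) - f(x - h)) / (2 * h)

random.seed(0)
x, sigma, h = 0.5, 0.005, 1e-6  # sigma ~ 1/sqrt(shots) for ~40k shots
grad = finite_diff(lambda t: noisy_cos(t, sigma), x, h)
# True derivative is -sin(0.5); the noisy estimate is dominated by sigma/h
```

With shots=None (sigma = 0) the same finite difference is accurate, which matches the observation that the derivative is fine at float precision but unusable with any realistic shot count.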

astralcai added a commit that referenced this issue Jun 6, 2024
…tudeAmplification`, and `Qubitization`. (#5806)

**Context:**
Templates that are not actually supported by `parameter_shift` should
have `grad_method=None` so that they are decomposed by
`_expand_transform_param_shift`

**Description of the Change:**
1. Adds the `data` of components of the templates to the `data` of the
templates such that trainable parameters are tracked
2. Adds `grad_method=None` for `ControlledSequence`, `Reflection`,
`AmplitudeAmplification`, and `Qubitization`.

**Related GitHub Issues:**
Fixes #5802
[sc-64967]
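The mechanism behind the fix can be illustrated schematically (hypothetical classes and names, not PennyLane's real API): an expansion pass keeps an operation only if it advertises a parameter-shift-compatible grad_method, and recursively decomposes anything with grad_method=None.

```python
# Schematic sketch: grad_method=None means "decompose me before
# differentiating". Classes and names here are illustrative only.

class Op:
    def __init__(self, name, grad_method, decomposition=()):
        self.name = name
        self.grad_method = grad_method  # "A" = parameter shift, None = decompose
        self.decomposition = list(decomposition)

def stopping_condition(op):
    # Keep the op only if parameter shift can differentiate it directly
    return op.grad_method == "A"

def expand(ops):
    expanded = []
    for op in ops:
        if stopping_condition(op):
            expanded.append(op)
        else:
            expanded.extend(expand(op.decomposition))
    return expanded

rx = Op("RX", "A")
crx = Op("CRX", "A")
template = Op("ControlledSequence", None, [crx, crx, crx])
assert [op.name for op in expand([template, rx])] == ["CRX", "CRX", "CRX", "RX"]
```

Setting grad_method=None on the four templates makes the expansion pass decompose them, so the gradient transform only ever sees gates it genuinely supports.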