Fix jacobian shape in VJP for measurements with shape and batched inputs (#5986)

**Context:**
This PR fixes a bug in computing the gradient of a circuit that combines:
1. batching in non-trainable data,
2. a measurement with a shape (e.g. `qml.probs`), and
3. more than one trainable parameter.

A minimal example that fails without the fix:
```python
import pennylane as qml

@qml.qnode(qml.device('default.qubit'), diff_method="parameter-shift")
def circuit(x, data):
    qml.RX(x[0], 0)
    qml.RX(x[1], 0)
    qml.RY(data, 0)
    return qml.probs(wires=0)

x = qml.numpy.array([0.5, 0.8], requires_grad=True)
data = qml.numpy.array([1.2, 2.3, 3.4], requires_grad=False)
circuit(x, data)
qml.jacobian(circuit)(x, data)
```
Reshaping the Jacobian when computing the VJP resolves the issue.

**Description of the Change:**
The Jacobian is now reshaped to match the shape of `dy` when computing the VJP
for measurements with a shape (such as `qml.probs`).
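For intuition, here is a minimal sketch of the shape bookkeeping in plain NumPy (the dimension names are illustrative, not from the codebase):

```python
import numpy as np

# Shapes from the example above: batch of 3 (non-trainable data),
# a 2-outcome probs measurement, and 2 trainable parameters.
n_params, batch, n_probs = 2, 3, 2

# One (batch, n_probs) Jacobian block per trainable parameter.
jac = np.random.rand(n_params, batch, n_probs)

# The cotangent dy matches the measurement output and is flattened to a row.
dy_row = np.random.rand(batch, n_probs).reshape(-1)  # num = batch * n_probs = 6

# Without reshaping, (2, 3, 2) @ (6,) raises a shape error. Flattening each
# parameter's block to (-1, num) makes the contraction well-defined:
vjp = jac.reshape(-1, dy_row.size) @ dy_row  # shape (2,): one entry per parameter
assert vjp.shape == (n_params,)
```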

**Benefits:**
One less bug :)

**Possible Drawbacks:**
None

**Related GitHub Issues:**
This PR fixes #5979

---------

Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: David Wierichs <david.wierichs@xanadu.ai>
Co-authored-by: Mudit Pandey <mudit.pandey@xanadu.ai>
4 people authored Jul 31, 2024
1 parent 111f896 commit 2892a9a
Showing 3 changed files with 27 additions and 3 deletions.
7 changes: 5 additions & 2 deletions doc/releases/changelog-dev.md
@@ -259,22 +259,25 @@
* `qml.AmplitudeEmbedding` has better support for features using low precision integer data types.
[(#5969)](https://github.com/PennyLaneAI/pennylane/pull/5969)

* Jacobian shape is fixed for measurements with dimension in `qml.gradients.vjp.compute_vjp_single`.
[(#5986)](https://github.com/PennyLaneAI/pennylane/pull/5986)

* `qml.lie_closure` works with sums of Paulis.
[(#6023)](https://github.com/PennyLaneAI/pennylane/pull/6023)


<h3>Contributors ✍️</h3>

This release contains contributions from (in alphabetical order):

Tarun Kumar Allamsetty,
Guillermo Alonso,
Utkarsh Azad,
Gabriel Bottrill,
Astral Cai,
Yushao Chen,
Ahmed Darwish,
Maja Franz,
Lillian M. A. Frederiksen,
Pietropaolo Frisoni,
Emiliano Godinez,
2 changes: 1 addition & 1 deletion pennylane/gradients/vjp.py
@@ -151,7 +151,7 @@ def compute_vjp_single(dy, jac, num=None):

    # Single measurement with dimension e.g. probs
    else:
-       jac = qml.math.stack(jac)
+       jac = qml.math.reshape(qml.math.stack(jac), (-1, num))
        try:
            res = jac @ dy_row
        except Exception:  # pylint: disable=broad-except
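As a sanity check of the fixed branch, a sketch along these lines should run (assuming `compute_vjp_single` is importable from `pennylane.gradients`, as the changelog entry suggests, and that `jac` is a tuple with one `(batch, n_probs)` block per trainable parameter, mirroring the test below):

```python
import numpy as np
from pennylane.gradients import compute_vjp_single

# Batched probs measurement: dy has shape (batch, n_probs) = (3, 2).
dy = np.array([[0.6, -0.7], [0.2, -0.7], [-5.2, 0.6]])

# One Jacobian block per trainable parameter (illustrative values).
jac = (np.ones((3, 2)), np.ones((3, 2)))

# Stacking alone gives shape (2, 3, 2); the added reshape flattens it to
# (2, 6) so that `jac @ dy_row` contracts over all 6 output entries.
vjp = compute_vjp_single(dy, jac)
print(np.shape(vjp))  # expected: (2,), one entry per trainable parameter
```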
21 changes: 21 additions & 0 deletions tests/gradients/core/test_vjp.py
@@ -676,3 +676,24 @@ def test_reduction_extend(self):
        # list will correspond to a single input parameter of the combined
        # tapes.
        assert len(res) == sum(len(t.trainable_params) for t in tapes)

    def test_batched_params_probs_jacobian(self):
        """Test that the VJP gets calculated correctly when inputs are batched, multiple
        trainable parameters are used and the measurement has a shape (probs)"""
        data = np.array([1.2, 2.3, 3.4])
        x0, x1 = 0.5, 0.8
        ops = [qml.RX(x0, 0), qml.RX(x1, 0), qml.RY(data, 0)]
        tape = qml.tape.QuantumScript(ops, [qml.probs(wires=0)], trainable_params=[0, 1])
        dy = np.array([[0.6, -0.7], [0.2, -0.7], [-5.2, 0.6]])
        v_tapes, fn = qml.gradients.batch_vjp([tape], [dy], qml.gradients.param_shift)

        dev = qml.device("default.qubit")
        vjp = fn(dev.execute(v_tapes))

        # Analytically expected Jacobian and VJP
        expected_jac = [-0.5 * np.cos(data) * np.sin(x0 + x1), 0.5 * np.cos(data) * np.sin(x0 + x1)]
        expected_vjp = np.tensordot(expected_jac, dy, axes=[[0, 1], [1, 0]])
        assert qml.math.shape(vjp) == (1, 2)  # num tapes, num trainable tape parameters
        assert np.allclose(
            vjp, expected_vjp
        )  # Both parameters essentially feed into the same RX rotation
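With the fix in place, the example from the PR description should run end to end. A quick sanity check one might add (the exact output shape is an assumption based on the batched probs output):

```python
import pennylane as qml

@qml.qnode(qml.device("default.qubit"), diff_method="parameter-shift")
def circuit(x, data):
    qml.RX(x[0], 0)
    qml.RX(x[1], 0)
    qml.RY(data, 0)
    return qml.probs(wires=0)

x = qml.numpy.array([0.5, 0.8], requires_grad=True)
data = qml.numpy.array([1.2, 2.3, 3.4], requires_grad=False)

jac = qml.jacobian(circuit)(x, data)
print(qml.math.shape(jac))  # expected: (3, 2, 2) -- (batch, probs, trainable params)
```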
