Add test workflow for legacy opmath (#5435)
Currently, the tests only run with the new opmath, so we would not know if we
introduced a bug that breaks the legacy opmath behaviour while it is still in
its deprecation cycle.

This PR adds a pytest option, `--disable-opmath`, that can be passed when
running the tests, and a corresponding workflow that runs the tests with
that option set to True. It can also be used locally, e.g. `python -m
pytest tests/ --disable-opmath=True`

Right now, it runs in CI on this PR, but once the tests are all
passing, the line triggering that will be removed and it will only run
every 3-4 days, in the middle of the night. Then we will add it to the
test matrix (a separate PR will modify the plugin test-matrix repo).

The changes to .yml files and to the `conftest` files are all about
allowing us to run these additional tests. A few modifications to tests
were made to allow them to pass with both legacy opmath and new opmath.

There is one test, currently marked as xfail under new opmath, that I would
call a bug. I opened an issue here: #5512
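One detail worth noting about the conftest conversion mentioned above: workflow inputs arrive as the strings "True"/"False", so an explicit comparison is needed rather than a naive `bool()` cast. A minimal illustration (the `flag_enabled` helper is hypothetical, not part of this PR):

```python
# Workflow inputs such as disable_new_opmath arrive as strings, and bool()
# on any non-empty string is True -- including the string "False".
assert bool("False") is True  # any non-empty string is truthy

# hypothetical helper showing the explicit string-to-bool conversion
def flag_enabled(raw: str) -> bool:
    return raw.strip().lower() == "true"

assert flag_enabled("True") is True
assert flag_enabled("False") is False
```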

---------

Co-authored-by: Mudit Pandey <mudit.pandey@xanadu.ai>
Co-authored-by: qottmann <korbinian.kottmann@gmail.com>
3 people authored Apr 17, 2024
1 parent 9a03cce commit a47d9bc
Showing 34 changed files with 1,545 additions and 60 deletions.
30 changes: 24 additions & 6 deletions .github/workflows/interface-unit-tests.yml
@@ -37,6 +37,11 @@ on:
required: false
type: string
default: ''
disable_new_opmath:
description: Whether to disable the new op_math or not when running the tests
required: false
type: string
default: "False"

jobs:
setup-ci-load:
@@ -155,6 +160,7 @@ jobs:
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: torch and not qcut and not finite-diff and not param-shift
requirements_file: ${{ strategy.job-index == 0 && 'torch.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


autograd-tests:
@@ -186,6 +192,7 @@ jobs:
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: autograd and not qcut and not finite-diff and not param-shift
disable_new_opmath: ${{ inputs.disable_new_opmath }}


tf-tests:
@@ -221,6 +228,7 @@ jobs:
pytest_additional_args: --splits 3 --group ${{ matrix.group }} --durations-path='.github/workflows/tf_tests_durations.json'
additional_pip_packages: pytest-split
requirements_file: ${{ strategy.job-index == 0 && 'tf.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


jax-tests:
@@ -256,6 +264,7 @@ jobs:
pytest_additional_args: --splits 5 --group ${{ matrix.group }} --durations-path='.github/workflows/jax_tests_durations.json'
additional_pip_packages: pytest-split
requirements_file: ${{ strategy.job-index == 0 && 'jax.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


core-tests:
@@ -291,6 +300,7 @@ jobs:
pytest_additional_args: --splits 5 --group ${{ matrix.group }} --durations-path='.github/workflows/core_tests_durations.json'
additional_pip_packages: pytest-split
requirements_file: ${{ strategy.job-index == 0 && 'core.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


all-interfaces-tests:
@@ -319,10 +329,11 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: true
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: all_interfaces
requirements_file: ${{ strategy.job-index == 0 && 'all_interfaces.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


external-libraries-tests:
@@ -351,11 +362,13 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: false
# using lightning master does not work for the tests with external libraries
install_pennylane_lightning_master: false
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: external
additional_pip_packages: pyzx pennylane-catalyst matplotlib stim
requirements_file: ${{ strategy.job-index == 0 && 'external.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


qcut-tests:
@@ -384,10 +397,11 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: true
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: qcut
additional_pip_packages: kahypar==1.1.7 opt_einsum
disable_new_opmath: ${{ inputs.disable_new_opmath }}


qchem-tests:
@@ -416,10 +430,11 @@ jobs:
install_jax: false
install_tensorflow: false
install_pytorch: false
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: qchem
additional_pip_packages: openfermionpyscf basis-set-exchange
disable_new_opmath: ${{ inputs.disable_new_opmath }}

gradients-tests:
needs:
@@ -450,9 +465,10 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: true
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: ${{ matrix.config.suite }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


data-tests:
@@ -481,10 +497,11 @@ jobs:
install_jax: false
install_tensorflow: false
install_pytorch: false
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: data
additional_pip_packages: h5py
disable_new_opmath: ${{ inputs.disable_new_opmath }}


device-tests:
@@ -529,10 +546,11 @@ jobs:
install_jax: ${{ !contains(matrix.config.skip_interface, 'jax') }}
install_tensorflow: ${{ !contains(matrix.config.skip_interface, 'tf') }}
install_pytorch: ${{ !contains(matrix.config.skip_interface, 'torch') }}
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_test_directory: pennylane/devices/tests
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_additional_args: --device=${{ matrix.config.device }} --shots=${{ matrix.config.shots }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


upload-to-codecov:
16 changes: 16 additions & 0 deletions .github/workflows/legacy_op_math.yml
@@ -0,0 +1,16 @@
name: Legacy opmath tests

on:
schedule:
- cron: "0 0 2 * *"
workflow_dispatch:

jobs:
tests:
uses: ./.github/workflows/interface-unit-tests.yml
secrets:
codecov_token: ${{ secrets.CODECOV_TOKEN }}
with:
branch: 'master'
run_lightened_ci: false
disable_new_opmath: "True"
7 changes: 6 additions & 1 deletion .github/workflows/unit-test.yml
@@ -94,6 +94,11 @@ on:
required: false
type: string
default: ''
disable_new_opmath:
description: Whether to disable the new op_math or not when running the tests
required: false
type: string
default: "False"

jobs:
test:
@@ -170,7 +175,7 @@ jobs:
COV_CORE_DATAFILE: .coverage.eager
TF_USE_LEGACY_KERAS: "1" # sets to use tf-keras (Keras2) instead of keras (Keras3) when running TF tests
# Calling PyTest by invoking Python first as that adds the current directory to sys.path
run: python -m pytest ${{ inputs.pytest_test_directory }} ${{ steps.pytest_args.outputs.args }} ${{ env.PYTEST_MARKER }}
run: python -m pytest ${{ inputs.pytest_test_directory }} ${{ steps.pytest_args.outputs.args }} ${{ env.PYTEST_MARKER }} --disable-opmath=${{ inputs.disable_new_opmath }}

- name: Adjust coverage file for Codecov
if: inputs.pipeline_mode == 'unit-tests'
14 changes: 14 additions & 0 deletions pennylane/devices/tests/conftest.py
@@ -226,6 +226,20 @@ def pytest_addoption(parser):
metavar="KEY=VAL",
help="Additional device kwargs.",
)
addoption(
"--disable-opmath", action="store", default="False", help="Whether to disable new_opmath"
)


# pylint: disable=eval-used
@pytest.fixture(scope="session", autouse=True)
def disable_opmath_if_requested(request):
"""Check the value of the --disable-opmath option and turn off
if True before running the tests"""
disable_opmath = request.config.getoption("--disable-opmath")
# value from yaml file is a string, convert to boolean
if eval(disable_opmath):
qml.operation.disable_new_opmath()


def pytest_generate_tests(metafunc):
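A side note on this conftest hunk: the session fixture converts the option string with `eval` (hence the pylint disable). A stricter parser would avoid evaluating arbitrary strings; a sketch under the assumption that only "True"/"False"-style values are expected (`parse_bool_option` is a hypothetical name, not part of this PR):

```python
def parse_bool_option(value: str) -> bool:
    """Strictly parse a "True"/"False" style option string without eval."""
    normalized = value.strip().lower()
    if normalized in {"true", "1", "yes"}:
        return True
    if normalized in {"false", "0", "no", ""}:
        return False
    # fail loudly on anything unexpected instead of executing it
    raise ValueError(f"cannot interpret {value!r} as a boolean")
```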
21 changes: 15 additions & 6 deletions tests/conftest.py
@@ -178,18 +178,27 @@ def tear_down_thermitian():
# Fixtures for testing under new and old opmath


def pytest_addoption(parser):
parser.addoption(
"--disable-opmath", action="store", default="False", help="Whether to disable new_opmath"
)


# pylint: disable=eval-used
@pytest.fixture(scope="session", autouse=True)
def disable_opmath_if_requested(request):
disable_opmath = request.config.getoption("--disable-opmath")
# value from yaml file is a string, convert to boolean
if eval(disable_opmath):
qml.operation.disable_new_opmath()


@pytest.fixture(scope="function")
def use_legacy_opmath():
with disable_new_opmath_cm() as cm:
yield cm


# @pytest.fixture(scope="function")
# def use_legacy_opmath():
# with disable_new_opmath_cm():
# yield


@pytest.fixture(scope="function")
def use_new_opmath():
with enable_new_opmath_cm() as cm:
22 changes: 21 additions & 1 deletion tests/data/attributes/operator/test_operator.py
@@ -83,6 +83,9 @@ def test_value_init(self, obs_in):
"""Test that a DatasetOperator can be value-initialized
from an observable, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

dset_op = DatasetOperator(obs_in)

assert dset_op.info["type_id"] == "operator"
@@ -95,6 +98,9 @@ def test_value_init(self, obs_in):
def test_bind_init(self, obs_in):
"""Test that DatasetOperator can be initialized from a HDF5 group
that contains a operator attribute."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

bind = DatasetOperator(obs_in).bind

dset_op = DatasetOperator(bind=bind)
@@ -124,6 +130,9 @@ def test_value_init(self, obs_in):
"""Test that a DatasetOperator can be value-initialized
from an observable, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

dset_op = DatasetOperator(obs_in)

assert dset_op.info["type_id"] == "operator"
@@ -135,6 +144,9 @@ def test_value_init(self, obs_in):
def test_bind_init(self, obs_in):
"""Test that DatasetOperator can be initialized from a HDF5 group
that contains an operator attribute."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

bind = DatasetOperator(obs_in).bind

dset_op = DatasetOperator(bind=bind)
@@ -160,6 +172,9 @@ def test_value_init(self, op_in):
"""Test that a DatasetOperator can be value-initialized
from an operator, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(op_in, qml.ops.LinearCombination):
op_in = qml.operation.convert_to_legacy_H(op_in)

dset_op = DatasetOperator(op_in)

assert dset_op.info["type_id"] == "operator"
@@ -172,7 +187,9 @@ def test_value_init(self, op_in):
def test_value_init_not_supported(self):
"""Test that a ValueError is raised if attempting to serialize an unsupported operator."""

class NotSupported(Operator): # pylint: disable=too-few-public-methods
class NotSupported(
Operator
): # pylint: disable=too-few-public-methods, unnecessary-ellipsis
"""An operator."""

...
@@ -195,6 +212,9 @@ def test_bind_init(self, op_in):
"""Test that a DatasetOperator can be bind-initialized
from an operator, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(op_in, qml.ops.LinearCombination):
op_in = qml.operation.convert_to_legacy_H(op_in)

bind = DatasetOperator(op_in).bind

dset_op = DatasetOperator(bind=bind)
3 changes: 3 additions & 0 deletions tests/devices/default_qubit/test_default_qubit.py
@@ -2058,6 +2058,9 @@ def test_differentiate_jitted_qnode(self, measurement_func):
"""Test that a jitted qnode can be correctly differentiated"""
import jax

if measurement_func is qml.var and not qml.operation.active_new_opmath():
pytest.skip(reason="Variance for this test circuit not supported with legacy opmath")

dev = DefaultQubit()

def qfunc(x, y):
7 changes: 5 additions & 2 deletions tests/devices/default_qubit/test_default_qubit_tracking.py
@@ -255,8 +255,11 @@ def test_single_expval(mps, expected_exec, expected_shots):
assert dev.tracker.totals["shots"] == 3 * expected_shots


@pytest.mark.xfail # TODO Prod instances are not automatically
@pytest.mark.usefixtures("use_new_opmath")
@pytest.mark.xfail(reason="bug in grouping for tracker with new opmath")
def test_multiple_expval_with_prods():
"""Can be combined with test below once the bug is fixed - there shouldn't
be a difference in behaviour between old and new opmath here"""
mps, expected_exec, expected_shots = (
[qml.expval(qml.PauliX(0)), qml.expval(qml.PauliX(0) @ qml.PauliY(1))],
1,
@@ -274,7 +277,7 @@ def test_multiple_expval_with_prods():


@pytest.mark.usefixtures("use_legacy_opmath")
def test_multiple_expval_with_Tensors_legacy_opmath():
def test_multiple_expval_with_tensors_legacy_opmath():
mps, expected_exec, expected_shots = (
[qml.expval(qml.PauliX(0)), qml.expval(qml.operation.Tensor(qml.PauliX(0), qml.PauliY(1)))],
1,
2 changes: 1 addition & 1 deletion tests/devices/qutrit_mixed/test_qutrit_mixed_sampling.py
@@ -365,7 +365,7 @@ def test_sample_observables(self):
qml.sample(qml.GellMann(0, 1) @ qml.GellMann(1, 1)), state, shots=shots
)
assert results_gel_1s.shape == (shots.total_shots,)
assert results_gel_1s.dtype == np.float64
assert results_gel_1s.dtype == np.float64 if qml.operation.active_new_opmath() else np.int64
assert sorted(np.unique(results_gel_1s)) == [-1, 0, 1]

@flaky
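One note on the changed dtype assertion in this hunk: Python's conditional expression binds more loosely than `==`, so `a == b if cond else c` parses as `(a == b) if cond else c`. In the legacy branch the assertion therefore evaluates the truthy type object `np.int64` rather than performing a comparison, and always passes. A small stand-alone demonstration of the precedence, using plain strings in place of NumPy dtypes:

```python
active = False            # stands in for qml.operation.active_new_opmath()
value = "int64"           # stands in for the observed result dtype
new_t, legacy_t = "float64", "int64"

# as written in the diff: the comparison happens only in the True branch;
# the False branch yields the (truthy) legacy object itself
loose = value == new_t if active else legacy_t
assert loose == "int64"   # not a boolean -- asserting it always succeeds

# the intended check: select the expected value first, then compare
strict = value == (new_t if active else legacy_t)
assert strict is True
```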
4 changes: 3 additions & 1 deletion tests/devices/test_default_qubit_tf.py
@@ -519,6 +519,7 @@ def test_four_qubit_parameters(self, init_state, op, func, theta, tol):
expected = func(theta) @ state
assert np.allclose(res, expected, atol=tol, rtol=0)

# pylint: disable=use-implicit-booleaness-not-comparison
def test_apply_ops_not_supported(self, mocker, monkeypatch):
"""Test that when a version of TensorFlow before 2.3.0 is used, the _apply_ops dictionary is
empty and application of a CNOT gate is performed using _apply_unitary_einsum"""
@@ -927,11 +928,12 @@ def test_three_qubit_no_parameters_broadcasted(self, broadcasted_init_state, op,
expected = np.einsum("ij,lj->li", mat, state)
assert np.allclose(res, expected, atol=tol, rtol=0)

@pytest.mark.usefixtures("use_new_opmath")
def test_direct_eval_hamiltonian_broadcasted_tf(self):
"""Tests that the correct result is returned when attempting to evaluate a Hamiltonian with
broadcasting and shots=None directly via its sparse representation with TF."""
dev = qml.device("default.qubit.tf", wires=2)
ham = qml.Hamiltonian(tf.Variable([0.1, 0.2]), [qml.PauliX(0), qml.PauliZ(1)])
ham = qml.ops.LinearCombination(tf.Variable([0.1, 0.2]), [qml.PauliX(0), qml.PauliZ(1)])

@qml.qnode(dev, diff_method="backprop", interface="tf")
def circuit():
3 changes: 2 additions & 1 deletion tests/devices/test_default_qubit_torch.py
@@ -914,12 +914,13 @@ def test_three_qubit_no_parameters_broadcasted(
expected = qml.math.einsum("ij,lj->li", op_mat, state)
assert torch.allclose(res, expected, atol=tol, rtol=0)

@pytest.mark.usefixtures("use_new_opmath")
def test_direct_eval_hamiltonian_broadcasted_torch(self, device, torch_device, mocker):
"""Tests that the correct result is returned when attempting to evaluate a Hamiltonian with
broadcasting and shots=None directly via its sparse representation with torch."""

dev = device(wires=2, torch_device=torch_device)
ham = qml.Hamiltonian(
ham = qml.ops.LinearCombination(
torch.tensor([0.1, 0.2], requires_grad=True), [qml.PauliX(0), qml.PauliZ(1)]
)

