Add test workflow for legacy opmath #5435

Merged · 67 commits · Apr 17, 2024

Commits (changes shown from 65 of 67 commits)
5f2528f
add use_new_opmath variable
lillian542 Mar 22, 2024
3e1905e
[ci skip]
lillian542 Mar 22, 2024
27352df
add initial legacy_opmath workflow
lillian542 Mar 22, 2024
11fea82
Merge branch 'master' into test_legacy_opmath
mudit2812 Mar 22, 2024
da16e5b
Merge branch 'master' into test_legacy_opmath
lillian542 Apr 3, 2024
a85f990
trigger ci
lillian542 Apr 3, 2024
60f61a7
for science
lillian542 Apr 3, 2024
da9a5d8
see if this triggers full ci
lillian542 Apr 3, 2024
f97f295
or does this make ci run?
lillian542 Apr 3, 2024
3699cf7
will ci still run?
lillian542 Apr 3, 2024
639f411
undo previous test change
lillian542 Apr 3, 2024
4d2657d
try to add line explicitly setting __use_new_opmath
lillian542 Apr 3, 2024
8ef1ac8
fix syntax for two line run statement
lillian542 Apr 3, 2024
cbaef13
temporarily run on pull request
lillian542 Apr 3, 2024
742d548
don't run on pull requests
lillian542 Apr 3, 2024
c823f14
also add variable to interface-unit-tests
lillian542 Apr 3, 2024
9a30cd5
try this
lillian542 Apr 3, 2024
52248a9
run on PR
lillian542 Apr 3, 2024
8cbd9e2
uncomment use_new_opmath kwarg
lillian542 Apr 3, 2024
8188231
sanity check that tests are running as expected
lillian542 Apr 3, 2024
b62c727
set use_new_opmath to false for all
lillian542 Apr 3, 2024
97f9a7b
temporary test to sanity check testing behaviour
lillian542 Apr 4, 2024
0ef2a04
update legacy workflow to run on this branch instead of master for now
lillian542 Apr 4, 2024
e19db17
add pytest fixture and input variable to optionally disable op_math
lillian542 Apr 4, 2024
a73f324
update to use the disable-opmath fixture
lillian542 Apr 4, 2024
04b6402
remove old commented out code
lillian542 Apr 4, 2024
f37925a
use and evaluate string instead of using yaml boolean
lillian542 Apr 4, 2024
2dd4620
update value in legacy_op_math workflow
lillian542 Apr 4, 2024
e2f37e1
pass value for disable_new_opmath through to unit-test.yml
lillian542 Apr 4, 2024
2571b1b
use quotations when passing variable
lillian542 Apr 4, 2024
668591c
add option to devices conftest too
lillian542 Apr 4, 2024
ab595f9
remove temporary sanity-check tests
lillian542 Apr 4, 2024
e1716e2
Legacy Hamiltonian converts Prods to Tensors
lillian542 Apr 5, 2024
668938a
change handling of disable-opmath input
lillian542 Apr 5, 2024
d3feda0
make disable-opmath input variable type consistent
lillian542 Apr 5, 2024
649c534
update failing tests
lillian542 Apr 5, 2024
bcc882d
Update tests/transforms/test_qcut.py
lillian542 Apr 5, 2024
f8e2e0e
test_autograd
Qottmann Apr 8, 2024
9eec37f
black formatting
Qottmann Apr 8, 2024
244ee08
update qutrit device test
lillian542 Apr 11, 2024
b103d23
run CI with lightning master instead of stable
lillian542 Apr 11, 2024
3828844
undo lightning thing
lillian542 Apr 12, 2024
cee0ad1
Merge branch 'master' into test_legacy_opmath
lillian542 Apr 12, 2024
1722b46
use lightning master again
lillian542 Apr 12, 2024
ec5d379
add codecov token
lillian542 Apr 12, 2024
b0d8c1e
Merge branch 'test_legacy_opmath' of github.com:PennyLaneAI/pennylane…
lillian542 Apr 12, 2024
17ed5ff
put external libraries back to lightning stable
lillian542 Apr 12, 2024
9301d22
make autograd test compatible with both
lillian542 Apr 12, 2024
837aee5
temporarily comment out test
lillian542 Apr 12, 2024
0ee82e7
update tests
lillian542 Apr 12, 2024
d31585e
update based on code review
lillian542 Apr 12, 2024
0ffda5d
add test for lines added to hamiltonian
lillian542 Apr 12, 2024
b50bf15
mark new opmath version as xfail
lillian542 Apr 12, 2024
1999627
Merge branch 'master' into test_legacy_opmath
lillian542 Apr 12, 2024
da3bfb7
update tests
lillian542 Apr 16, 2024
d9508ac
revert changes to hamiltonian
lillian542 Apr 16, 2024
994f834
revert added hamiltonian test
lillian542 Apr 16, 2024
37ddf42
update qcut tests
lillian542 Apr 16, 2024
42442b6
update fermi tests
lillian542 Apr 16, 2024
6977fff
update vqe tests
lillian542 Apr 16, 2024
0212bc6
Merge branch 'master' into test_legacy_opmath
lillian542 Apr 16, 2024
e6c59f6
update default_qubit test
lillian542 Apr 16, 2024
1f4e1d7
update device tests
lillian542 Apr 16, 2024
8eff78e
Merge branch 'master' into test_legacy_opmath
lillian542 Apr 16, 2024
13defd5
trigger ci
lillian542 Apr 16, 2024
f18265c
Update workflow to test on master and not run on every PR
lillian542 Apr 17, 2024
89e7d15
Merge branch 'master' into test_legacy_opmath
lillian542 Apr 17, 2024
30 changes: 24 additions & 6 deletions .github/workflows/interface-unit-tests.yml
@@ -37,6 +37,11 @@ on:
required: false
type: string
default: ''
disable_new_opmath:
description: Whether to disable the new op_math or not when running the tests
required: false
type: string
default: "False"

jobs:
setup-ci-load:
@@ -155,6 +160,7 @@ jobs:
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: torch and not qcut and not finite-diff and not param-shift
requirements_file: ${{ strategy.job-index == 0 && 'torch.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


autograd-tests:
@@ -186,6 +192,7 @@ jobs:
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: autograd and not qcut and not finite-diff and not param-shift
disable_new_opmath: ${{ inputs.disable_new_opmath }}


tf-tests:
@@ -221,6 +228,7 @@ jobs:
pytest_additional_args: --splits 3 --group ${{ matrix.group }} --durations-path='.github/workflows/tf_tests_durations.json'
additional_pip_packages: pytest-split
requirements_file: ${{ strategy.job-index == 0 && 'tf.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


jax-tests:
@@ -256,6 +264,7 @@ jobs:
pytest_additional_args: --splits 5 --group ${{ matrix.group }} --durations-path='.github/workflows/jax_tests_durations.json'
additional_pip_packages: pytest-split
requirements_file: ${{ strategy.job-index == 0 && 'jax.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


core-tests:
@@ -291,6 +300,7 @@ jobs:
pytest_additional_args: --splits 5 --group ${{ matrix.group }} --durations-path='.github/workflows/core_tests_durations.json'
additional_pip_packages: pytest-split
requirements_file: ${{ strategy.job-index == 0 && 'core.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


all-interfaces-tests:
@@ -319,10 +329,11 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: true
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: all_interfaces
requirements_file: ${{ strategy.job-index == 0 && 'all_interfaces.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


external-libraries-tests:
@@ -351,11 +362,13 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: false
# using lightning master does not work for the tests with external libraries
install_pennylane_lightning_master: false
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: external
additional_pip_packages: pyzx pennylane-catalyst matplotlib stim
requirements_file: ${{ strategy.job-index == 0 && 'external.txt' || '' }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


qcut-tests:
@@ -384,10 +397,11 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: true
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: qcut
additional_pip_packages: kahypar==1.1.7 opt_einsum
disable_new_opmath: ${{ inputs.disable_new_opmath }}


qchem-tests:
@@ -416,10 +430,11 @@ jobs:
install_jax: false
install_tensorflow: false
install_pytorch: false
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: qchem
additional_pip_packages: openfermionpyscf basis-set-exchange
disable_new_opmath: ${{ inputs.disable_new_opmath }}

gradients-tests:
needs:
@@ -450,9 +465,10 @@ jobs:
install_jax: true
install_tensorflow: true
install_pytorch: true
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: ${{ matrix.config.suite }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


data-tests:
@@ -481,10 +497,11 @@ jobs:
install_jax: false
install_tensorflow: false
install_pytorch: false
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_markers: data
additional_pip_packages: h5py
disable_new_opmath: ${{ inputs.disable_new_opmath }}


device-tests:
@@ -529,10 +546,11 @@ jobs:
install_jax: ${{ !contains(matrix.config.skip_interface, 'jax') }}
install_tensorflow: ${{ !contains(matrix.config.skip_interface, 'tf') }}
install_pytorch: ${{ !contains(matrix.config.skip_interface, 'torch') }}
install_pennylane_lightning_master: false
install_pennylane_lightning_master: true
pytest_test_directory: pennylane/devices/tests
pytest_coverage_flags: ${{ inputs.pytest_coverage_flags }}
pytest_additional_args: --device=${{ matrix.config.device }} --shots=${{ matrix.config.shots }}
disable_new_opmath: ${{ inputs.disable_new_opmath }}


upload-to-codecov:
17 changes: 17 additions & 0 deletions .github/workflows/legacy_op_math.yml
@@ -0,0 +1,17 @@
name: Legacy opmath tests

on:
schedule:
- cron: "0 0 2 * *"
pull_request: # this should be removed so it only runs on the schedule before merging
workflow_dispatch:

jobs:
tests:
uses: ./.github/workflows/interface-unit-tests.yml
secrets:
codecov_token: ${{ secrets.CODECOV_TOKEN }}
with:
branch: ${{ github.ref }} # this should be set back to master before merging
run_lightened_ci: false
disable_new_opmath: "True"
7 changes: 6 additions & 1 deletion .github/workflows/unit-test.yml
@@ -94,6 +94,11 @@ on:
required: false
type: string
default: ''
disable_new_opmath:
description: Whether to disable the new op_math or not when running the tests
required: false
type: string
default: "False"

jobs:
test:
@@ -170,7 +175,7 @@ jobs:
COV_CORE_DATAFILE: .coverage.eager
TF_USE_LEGACY_KERAS: "1" # sets to use tf-keras (Keras2) instead of keras (Keras3) when running TF tests
# Calling PyTest by invoking Python first as that adds the current directory to sys.path
run: python -m pytest ${{ inputs.pytest_test_directory }} ${{ steps.pytest_args.outputs.args }} ${{ env.PYTEST_MARKER }}
run: python -m pytest ${{ inputs.pytest_test_directory }} ${{ steps.pytest_args.outputs.args }} ${{ env.PYTEST_MARKER }} --disable-opmath=${{ inputs.disable_new_opmath }}

- name: Adjust coverage file for Codecov
if: inputs.pipeline_mode == 'unit-tests'
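The commit history above ("use and evaluate string instead of using yaml boolean", "use quotations when passing variable") hints at why disable_new_opmath travels as the quoted string "False"/"True" and is later evaluated in the conftest instead of being checked for truthiness. A minimal sketch of the pitfall, assuming nothing beyond standard Python:

    # Why the flag is passed as a quoted string and evaluated, not used directly:
    disable_opmath = "False"              # value as it arrives from the workflow input

    assert bool(disable_opmath) is True   # any non-empty string is truthy, so a naive
                                          # `if disable_opmath:` would always disable opmath
    assert eval(disable_opmath) is False  # evaluating the literal recovers the intended boolean

    # A hypothetical stricter parse (not part of this PR) that avoids eval:
    parsed = disable_opmath.strip().lower() == "true"
    assert parsed is False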
14 changes: 14 additions & 0 deletions pennylane/devices/tests/conftest.py
@@ -226,6 +226,20 @@ def pytest_addoption(parser):
metavar="KEY=VAL",
help="Additional device kwargs.",
)
addoption(
"--disable-opmath", action="store", default="False", help="Whether to disable new_opmath"
)


# pylint: disable=eval-used
@pytest.fixture(scope="session", autouse=True)
def disable_opmath_if_requested(request):
"""Check the value of the --disable-opmath option and turn off
if True before running the tests"""
disable_opmath = request.config.getoption("--disable-opmath")
# value from yaml file is a string, convert to boolean
if eval(disable_opmath):
qml.operation.disable_new_opmath()


def pytest_generate_tests(metafunc):
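With the --disable-opmath option registered in the device-test conftest, the device suite can be driven the same way the device-tests job does in the workflow above. A hypothetical local invocation (device and shots values are illustrative placeholders, not taken from this PR):

    import pytest

    # Runs the PennyLane device test suite with new opmath disabled; mirrors the
    # pytest_test_directory and the --device/--shots/--disable-opmath arguments
    # that the workflow passes on the command line.
    exit_code = pytest.main(
        [
            "pennylane/devices/tests",
            "--device=default.qubit",    # assumed device name for illustration
            "--shots=None",
            "--disable-opmath=True",     # string value, converted by the autouse fixture
        ]
    )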
21 changes: 15 additions & 6 deletions tests/conftest.py
@@ -178,18 +178,27 @@ def tear_down_thermitian():
# Fixtures for testing under new and old opmath


def pytest_addoption(parser):
parser.addoption(
"--disable-opmath", action="store", default="False", help="Whether to disable new_opmath"
)


# pylint: disable=eval-used
@pytest.fixture(scope="session", autouse=True)
def disable_opmath_if_requested(request):
disable_opmath = request.config.getoption("--disable-opmath")
# value from yaml file is a string, convert to boolean
if eval(disable_opmath):
qml.operation.disable_new_opmath()


@pytest.fixture(scope="function")
def use_legacy_opmath():
with disable_new_opmath_cm() as cm:
yield cm


# @pytest.fixture(scope="function")
# def use_legacy_opmath():
# with disable_new_opmath_cm():
# yield


@pytest.fixture(scope="function")
def use_new_opmath():
with enable_new_opmath_cm() as cm:
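Beyond the session-wide flag, the two function-scoped fixtures above let individual tests pin an opmath mode explicitly, which is how the test updates later in this diff opt in. A minimal sketch, assuming only the fixtures defined in this conftest (the test bodies are illustrative):

    import pytest
    import pennylane as qml


    @pytest.mark.usefixtures("use_legacy_opmath")
    def test_with_legacy_opmath():
        # New opmath is temporarily disabled for this test only.
        obs = qml.operation.Tensor(qml.PauliX(0), qml.PauliY(1))
        assert not qml.operation.active_new_opmath()
        assert isinstance(obs, qml.operation.Tensor)


    @pytest.mark.usefixtures("use_new_opmath")
    def test_with_new_opmath():
        # New opmath is enabled here even when --disable-opmath=True was passed.
        obs = qml.PauliX(0) @ qml.PauliY(1)
        assert qml.operation.active_new_opmath()
        assert isinstance(obs, qml.ops.op_math.Prod)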
22 changes: 21 additions & 1 deletion tests/data/attributes/operator/test_operator.py
@@ -83,6 +83,9 @@ def test_value_init(self, obs_in):
"""Test that a DatasetOperator can be value-initialized
from an observable, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

dset_op = DatasetOperator(obs_in)

assert dset_op.info["type_id"] == "operator"
@@ -95,6 +98,9 @@ def test_value_init(self, obs_in):
def test_bind_init(self, obs_in):
"""Test that DatasetOperator can be initialized from a HDF5 group
that contains a operator attribute."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

bind = DatasetOperator(obs_in).bind

dset_op = DatasetOperator(bind=bind)
@@ -124,6 +130,9 @@ def test_value_init(self, obs_in):
"""Test that a DatasetOperator can be value-initialized
from an observable, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

dset_op = DatasetOperator(obs_in)

assert dset_op.info["type_id"] == "operator"
@@ -135,6 +144,9 @@ def test_value_init(self, obs_in):
def test_bind_init(self, obs_in):
"""Test that DatasetOperator can be initialized from a HDF5 group
that contains an operator attribute."""
if not qml.operation.active_new_opmath() and isinstance(obs_in, qml.ops.LinearCombination):
obs_in = qml.operation.convert_to_legacy_H(obs_in)

bind = DatasetOperator(obs_in).bind

dset_op = DatasetOperator(bind=bind)
@@ -160,6 +172,9 @@ def test_value_init(self, op_in):
"""Test that a DatasetOperator can be value-initialized
from an operator, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(op_in, qml.ops.LinearCombination):
op_in = qml.operation.convert_to_legacy_H(op_in)

dset_op = DatasetOperator(op_in)

assert dset_op.info["type_id"] == "operator"
@@ -172,7 +187,9 @@ def test_value_init(self, op_in):
def test_value_init_not_supported(self):
"""Test that a ValueError is raised if attempting to serialize an unsupported operator."""

class NotSupported(Operator): # pylint: disable=too-few-public-methods
class NotSupported(
Operator
): # pylint: disable=too-few-public-methods, unnecessary-ellipsis
"""An operator."""

...
@@ -195,6 +212,9 @@ def test_bind_init(self, op_in):
"""Test that a DatasetOperator can be bind-initialized
from an operator, and that the deserialized operator
is equivalent."""
if not qml.operation.active_new_opmath() and isinstance(op_in, qml.ops.LinearCombination):
op_in = qml.operation.convert_to_legacy_H(op_in)

bind = DatasetOperator(op_in).bind

dset_op = DatasetOperator(bind=bind)
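The same guard recurs throughout the dataset-operator tests above: when new opmath is off, LinearCombination inputs are first converted to legacy Hamiltonians. A hypothetical helper that factors out that pattern (not part of this PR) could look like:

    import pennylane as qml


    def as_legacy_if_needed(op):
        """Return a legacy Hamiltonian when new opmath is disabled and `op` is a LinearCombination."""
        if not qml.operation.active_new_opmath() and isinstance(op, qml.ops.LinearCombination):
            return qml.operation.convert_to_legacy_H(op)
        return op


    # Usage inside a test body, mirroring the in-line checks above:
    # obs_in = as_legacy_if_needed(obs_in)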
3 changes: 3 additions & 0 deletions tests/devices/default_qubit/test_default_qubit.py
@@ -2058,6 +2058,9 @@ def test_differentiate_jitted_qnode(self, measurement_func):
"""Test that a jitted qnode can be correctly differentiated"""
import jax

if measurement_func is qml.var and not qml.operation.active_new_opmath():
pytest.skip(reason="Variance for this test circuit not supported with legacy opmath")

dev = DefaultQubit()

def qfunc(x, y):
7 changes: 5 additions & 2 deletions tests/devices/default_qubit/test_default_qubit_tracking.py
@@ -255,8 +255,11 @@ def test_single_expval(mps, expected_exec, expected_shots):
assert dev.tracker.totals["shots"] == 3 * expected_shots


@pytest.mark.xfail # TODO Prod instances are not automatically
@pytest.mark.usefixtures("use_new_opmath")
@pytest.mark.xfail(reason="bug in grouping for tracker with new opmath")
def test_multiple_expval_with_prods():
"""Can be combined with test below once the bug is fixed - there shouldn't
be a difference in behaviour between old and new opmath here"""
mps, expected_exec, expected_shots = (
[qml.expval(qml.PauliX(0)), qml.expval(qml.PauliX(0) @ qml.PauliY(1))],
1,
@@ -274,7 +277,7 @@


@pytest.mark.usefixtures("use_legacy_opmath")
def test_multiple_expval_with_Tensors_legacy_opmath():
def test_multiple_expval_with_tensors_legacy_opmath():
mps, expected_exec, expected_shots = (
[qml.expval(qml.PauliX(0)), qml.expval(qml.operation.Tensor(qml.PauliX(0), qml.PauliY(1)))],
1,
2 changes: 1 addition & 1 deletion tests/devices/qutrit_mixed/test_qutrit_mixed_sampling.py
@@ -365,7 +365,7 @@ def test_sample_observables(self):
qml.sample(qml.GellMann(0, 1) @ qml.GellMann(1, 1)), state, shots=shots
)
assert results_gel_1s.shape == (shots.total_shots,)
assert results_gel_1s.dtype == np.float64
assert results_gel_1s.dtype == (np.float64 if qml.operation.active_new_opmath() else np.int64)
assert sorted(np.unique(results_gel_1s)) == [-1, 0, 1]

@flaky
4 changes: 3 additions & 1 deletion tests/devices/test_default_qubit_tf.py
@@ -519,6 +519,7 @@ def test_four_qubit_parameters(self, init_state, op, func, theta, tol):
expected = func(theta) @ state
assert np.allclose(res, expected, atol=tol, rtol=0)

# pylint: disable=use-implicit-booleaness-not-comparison
def test_apply_ops_not_supported(self, mocker, monkeypatch):
"""Test that when a version of TensorFlow before 2.3.0 is used, the _apply_ops dictionary is
empty and application of a CNOT gate is performed using _apply_unitary_einsum"""
@@ -927,11 +928,12 @@ def test_three_qubit_no_parameters_broadcasted(self, broadcasted_init_state, op,
expected = np.einsum("ij,lj->li", mat, state)
assert np.allclose(res, expected, atol=tol, rtol=0)

@pytest.mark.usefixtures("use_new_opmath")
def test_direct_eval_hamiltonian_broadcasted_tf(self):
"""Tests that the correct result is returned when attempting to evaluate a Hamiltonian with
broadcasting and shots=None directly via its sparse representation with TF."""
dev = qml.device("default.qubit.tf", wires=2)
ham = qml.Hamiltonian(tf.Variable([0.1, 0.2]), [qml.PauliX(0), qml.PauliZ(1)])
ham = qml.ops.LinearCombination(tf.Variable([0.1, 0.2]), [qml.PauliX(0), qml.PauliZ(1)])

@qml.qnode(dev, diff_method="backprop", interface="tf")
def circuit():
3 changes: 2 additions & 1 deletion tests/devices/test_default_qubit_torch.py
@@ -914,12 +914,13 @@ def test_three_qubit_no_parameters_broadcasted(
expected = qml.math.einsum("ij,lj->li", op_mat, state)
assert torch.allclose(res, expected, atol=tol, rtol=0)

@pytest.mark.usefixtures("use_new_opmath")
def test_direct_eval_hamiltonian_broadcasted_torch(self, device, torch_device, mocker):
"""Tests that the correct result is returned when attempting to evaluate a Hamiltonian with
broadcasting and shots=None directly via its sparse representation with torch."""

dev = device(wires=2, torch_device=torch_device)
ham = qml.Hamiltonian(
ham = qml.ops.LinearCombination(
torch.tensor([0.1, 0.2], requires_grad=True), [qml.PauliX(0), qml.PauliZ(1)]
)
