[QHC-697] Adding Calibration checkpoints #777

Open

GuillermoAbadLopez wants to merge 38 commits into main from calibration_checkpoints. Changes shown from 16 commits.

Commits (38)
d7f6cec
Adding calibration checkpoint and diagnose
GuillermoAbadLopez Aug 16, 2024
887c6aa
Update calibration_controller.py
GuillermoAbadLopez Aug 16, 2024
ce3ecb8
make `check_points_passed_comparison`
GuillermoAbadLopez Aug 16, 2024
b841d3e
reorder calibration controller methods
GuillermoAbadLopez Aug 16, 2024
dc7fe13
Update calibration_controller.py
GuillermoAbadLopez Aug 16, 2024
cc66365
update docstring
GuillermoAbadLopez Aug 16, 2024
9a5ca11
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Aug 23, 2024
7235a36
Update calibration_controller.py
GuillermoAbadLopez Aug 23, 2024
12f9d1c
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Aug 23, 2024
f74acc1
Add changelog
GuillermoAbadLopez Aug 23, 2024
f4816ad
Update changelog-dev.md
GuillermoAbadLopez Aug 23, 2024
cbf484b
Update changelog-dev.md
GuillermoAbadLopez Aug 23, 2024
5e2024e
Improve documentation
GuillermoAbadLopez Aug 23, 2024
edefa3f
Update calibration_controller.py
GuillermoAbadLopez Aug 23, 2024
8c8ff1d
Update calibration_controller.py
GuillermoAbadLopez Aug 23, 2024
1651ad1
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Sep 12, 2024
e2eee93
Apply suggestions from code review
GuillermoAbadLopez Sep 13, 2024
b9b4b67
Add `any()` for dependencies checking
GuillermoAbadLopez Sep 13, 2024
a7d3347
Add documentation, improve read of been-calibrated_succesfully and cr…
GuillermoAbadLopez Sep 13, 2024
b50a299
new logic for diagnose
GuillermoAbadLopez Sep 13, 2024
82609be
Update calibration_controller.py
GuillermoAbadLopez Sep 13, 2024
5a856a3
Change diagnose for diagnose_checkpoints
GuillermoAbadLopez Sep 13, 2024
da725d8
Adding test for CalibrationNode
GuillermoAbadLopez Sep 25, 2024
689afcc
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Sep 25, 2024
be93291
Adding test for run_auto_calibration method
GuillermoAbadLopez Sep 25, 2024
fcdce95
Merge branch 'calibration_checkpoints' of https://github.com/qilimanj…
GuillermoAbadLopez Sep 25, 2024
76f2f3d
Adding tests for calibration
GuillermoAbadLopez Sep 25, 2024
3e74a45
Fix calls in tests for calibration
GuillermoAbadLopez Sep 25, 2024
2cb5aed
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Nov 5, 2024
d590040
Solve ruff
GuillermoAbadLopez Nov 5, 2024
25c18f2
Adding more tests
GuillermoAbadLopez Nov 5, 2024
d259f63
Adding more tests
GuillermoAbadLopez Nov 5, 2024
80f64eb
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Nov 6, 2024
09a2e55
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Nov 6, 2024
c4ab366
Improving tests
GuillermoAbadLopez Nov 6, 2024
65877de
Update test_calibration_controller.py
GuillermoAbadLopez Nov 6, 2024
a96c9b6
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Nov 7, 2024
ef3ec61
Merge branch 'main' into calibration_checkpoints
GuillermoAbadLopez Jan 21, 2025
15 changes: 15 additions & 0 deletions docs/releases/changelog-dev.md
@@ -262,6 +262,21 @@

[#770](https://github.com/qilimanjaro-tech/qililab/pull/770)

- Added `checkpoints` logic to the calibration, to skip parts of the graph that are already good to go.
  The diagnosis of the `checkpoints` starts from the first ones and stops, in each branch, at the first checkpoint that doesn't pass.

Example:

  If `[i]` are notebooks and `[V]` or `[X]` are checkpoints that pass or fail respectively, in a graph like:

- `[0] - [1] - [V] - [3] - [4] - [X] - [5]`, calibration would start from notebook 3

- `[0] - [1] - [V] - [3] - [4] - [V] - [5]`, calibration would start from notebook 5

  - `[0] - [1] - [X] - [3] - [4] - [.] - [5]`, calibration would start from notebook 0 (notice that the second checkpoint is not checked, since the first one already fails)
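
  A minimal sketch of the setup this enables (a sketch only: it assumes `CalibrationController` and `CalibrationNode` are importable from `qililab.calibration`, and the notebook paths, graph wiring and runcard path are illustrative):

  ```python
  import networkx as nx
  import numpy as np

  from qililab.calibration import CalibrationController, CalibrationNode  # assumed import path

  nodes = {}

  first = CalibrationNode(nb_path="notebooks/first.ipynb", qubit_index=0)
  nodes[first.node_id] = first

  # Checkpoint node: if its exported "fidelity" reaches 0.85, `diagnose()` marks
  # this branch as passed and `calibrate_all()` starts just after this node.
  second = CalibrationNode(
      nb_path="notebooks/second.ipynb",
      qubit_index=0,
      sweep_interval=np.arange(start=0, stop=19, step=1),
      check_point=True,
      check_value={"fidelity": 0.85},
  )
  nodes[second.node_id] = second

  graph = nx.DiGraph()
  graph.add_edge(first.node_id, second.node_id)  # dependency -> dependant

  # Constructor arguments sketched from the attributes used in this PR;
  # the runcard path is a placeholder.
  controller = CalibrationController(
      node_sequence=nodes, calibration_graph=graph, runcard="runcards/example.yml"
  )
  controller.run_automatic_calibration()
  ```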

[#777](https://github.com/qilimanjaro-tech/qililab/pull/777)

- Added delay variables to the Qblox qprogram implementation. The delays are specified in the runcard in nanoseconds and can be positive or negative scalars (negative delays make the rest of the buses wait). The delay is a wait applied to each iteration of a loop where the bus is present.

Example:
…
159 changes: 136 additions & 23 deletions src/qililab/calibration/calibration_controller.py
@@ -72,9 +72,10 @@ class CalibrationController:

The calibration process is structured into three levels of methods:

1. **Highest Level Method**: The ``run_automatic_calibration()`` method finds all the end nodes of the graph (`leaves`, those without further `dependents`) and runs ``calibrate_all()`` on them.
1. **Highest Level Method**: The ``run_automatic_calibration()`` method finds all the end nodes of the graph (`leaves`, those without further `dependents`) and runs ``diagnose()`` and then ``calibrate_all()`` on the ones needed.

2. **Mid-Level Method**: ``calibrate_all()``.
2. **Mid-Level Methods**: ``diagnose()`` and ``calibrate_all()``.
- ``diagnose()`` searches for the first bad ``checkpoint`` and marks it, such that the next call of ``calibrate_all()`` starts just after the last passed ``checkpoint``.
- ``calibrate_all(node)`` starts from the `roots` that ``node`` depends on, and moves forward (`dependency -> dependant`) until ``node``, checking the last execution times at each step.

3. **Low-Level Method**: ``calibrate()`` is the method you would be calling during this process to interact with the ``nodes``.
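
A rough sketch of how these three levels chain together (mirroring the ``run_automatic_calibration()`` body shown further down in this diff; ``controller`` stands for an assumed ``CalibrationController`` instance):

```python
# Level 1: find the graph leaves (the nodes nothing else depends on).
leaves = [
    controller.node_sequence[node]
    for node, out_degree in controller.calibration_graph.out_degree()
    if out_degree == 0
]

# Level 2: first mark passing/failing checkpoints, then calibrate what is left.
for leaf in leaves:
    controller.diagnose(leaf)
for leaf in leaves:
    controller.calibrate_all(leaf)

# Level 3: calibrate() is invoked internally on each node that actually needs to run.
```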
@@ -117,6 +118,8 @@ class CalibrationController:
nb_path="notebooks/second.ipynb",
qubit_index=qubit,
sweep_interval=np.arange(start=0, stop=19, step=1),
check_point=True,
check_value={"fidelity": 0.85},
)
nodes[second[qubit].node_id] = second[qubit]

@@ -184,32 +187,26 @@ def __init__(
A node will be skipped if the ``drift timeout`` is bigger than the time since its last calibration. Defaults to 7200 (2h).
"""

def calibrate_all(self, node: CalibrationNode):
"""Calibrates all the nodes sequentially.
def run_automatic_calibration(self) -> dict[str, dict]:
"""Runs the full automatic calibration procedure and retrieves the final set parameters and achieved fidelities dictionaries.

Args:
node (CalibrationNode): The node where we want to start the `calibration_all()` on. Normally you would want
this node to be the furthest node in the calibration graph.
"""
logger.info("WORKFLOW: Calibrating all %s.\n", node.node_id)
for n in self._dependencies(node):
self.calibrate_all(n)
This is the primary interface for our calibration procedure and the highest level algorithm, which finds all the end nodes of the graph
(`leaves`, those without further `dependents`) and runs ``diagnose()`` and then ``calibrate_all()`` on the ones needed.

# You can skip it from the `drift_timeout`, but also skip it due to `been_calibrated()`
# If you want to start the calibration from the start again, just decrease the `drift_timeout` or remove the executed files!
if not node.been_calibrated:
if node.previous_timestamp is None or self._is_timeout_expired(node.previous_timestamp, self.drift_timeout):
self.calibrate(node)
self._update_parameters(node)
If ``checkpoints`` are present, ``diagnose()`` will skip the parts of the graph that are already good to go.

node.been_calibrated = True
# After passing this block `node.been_calibrated` will always be True, so it will not be recalibrated again.
``diagnose()`` starts from the first nodes and stops, in each branch, at the first checkpoint that doesn't pass.

def run_automatic_calibration(self) -> dict[str, dict]:
"""Runs the full automatic calibration procedure and retrieves the final set parameters and achieved fidelities dictionaries.
Example:

If `[i]` are notebooks and `[V]` or `[X]` are checkpoints that pass or fail respectively, in a graph like:

- `[0] - [1] - [V] - [3] - [4] - [X] - [5]`, calibration would start from notebook 3

- `[0] - [1] - [V] - [3] - [4] - [V] - [5]`, calibration would start from notebook 5

- `[0] - [1] - [X] - [3] - [4] - [.] - [5]`, calibration would start from notebook 0 (notice that the second checkpoint is not checked, since the first one already fails)

This is the primary interface for our calibration procedure and the highest level algorithm, which finds all the end nodes of the graph
(`leaves`, those without further `dependents`) and runs ``calibrate_all()`` on them.

Returns:
dict[str, dict]: Dictionary for the last set parameters and the last achieved fidelities. It contains two dictionaries (dict[tuple, tuple]) in the keys:
@@ -225,6 +222,9 @@ def run_automatic_calibration(self) -> dict[str, dict]:
self.node_sequence[node] for node, out_degree in self.calibration_graph.out_degree() if out_degree == 0
]

for node in highest_level_nodes:
self.diagnose(node)

for node in highest_level_nodes:
self.calibrate_all(node)

@@ -235,6 +235,119 @@
)
return self.get_qubit_fidelities_and_parameters_df_tables()

def calibrate_all(self, node: CalibrationNode) -> None:
"""Calibrates all the nodes sequentially.

Args:
node (CalibrationNode): The node from which we want to start ``calibrate_all()``. Normally you would want
this node to be the furthest node in the calibration graph.
"""
logger.info("WORKFLOW: Calibrating all %s.\n", node.node_id)

# If diagnose found a bad checkpoint, calibration starts just after the last passed checkpoint before it:
# [O] - [V] - [O] - [V] - [O] - [X] - [ ] - [ ] - [ ] - ... the ones after the first bad checkpoint are left
# unchecked, so that we can just find the first [V], going from right to left in ``calibrate_all()`` calls, and start there.
if node.check_point_passed is True:
return

for n in self._dependencies(node):
self.calibrate_all(n)

# A node can be skipped because of the `drift_timeout`, but also because it has already `been_calibrated`.
# If you want to start the calibration from scratch again, just decrease the `drift_timeout` or remove the executed files!
if not node.been_calibrated:
if node.previous_timestamp is None or self._is_timeout_expired(node.previous_timestamp, self.drift_timeout):
self.calibrate(node)
self._update_parameters(node)

node.been_calibrated = True
# After passing this block `node.been_calibrated` will always be True, so it will not be recalibrated again.

def diagnose(self, node: CalibrationNode) -> bool:
"""Searches for the first bad ``checkpoint``, and if found, we start the calibration process with the recursive
``calibrate_all()`` calls, just after the last passed ``checkpoint``.

This diagnosis of the `checkpoints` starts from the first ones and stops, in each branch, at the first checkpoint that doesn't pass.

Example:

If `[i]` are notebooks and `[V]` or `[X]` are checkpoints that pass or fail respectively, in a graph like:

- `[0] - [1] - [V] - [3] - [4] - [X] - [5]`, calibration would start from notebook 3

- `[0] - [1] - [V] - [3] - [4] - [V] - [5]`, calibration would start from notebook 5

- `[0] - [1] - [X] - [3] - [4] - [.] - [5]`, calibration would start from notebook 0 (notice that the second checkpoint is not checked, since the first one already fails)

Args:
node (CalibrationNode): The node to diagnose.

Returns:
bool: Whether the diagnose process has finished or not.
"""
logger.info("WORKFLOW: Diagnosing %s.\n", node.node_id)
diagnose_finished = any(self.diagnose(n) for n in self._dependencies(node))

# When we have encountered a bad checkpoint among the dependencies, we should not diagnose any further:
# [O] - [V] - [O] - [V] - [O] - [X] - [ ] - [ ] - [ ] - ... the ones after the first bad checkpoint are left
# unchecked, so that ``calibrate_all()`` can simply find the first [V], going from right to left, and start there.
if diagnose_finished is True:
return True

# A node can be skipped because of the `drift_timeout`, but also because it has already `been_calibrated`.
# If you want to start the calibration from scratch again, just decrease the `drift_timeout` or remove the executed files!
if (
node.check_point
and not node.been_calibrated
and (
node.previous_timestamp is None or self._is_timeout_expired(node.previous_timestamp, self.drift_timeout)
)
):
self.calibrate(node)
if node.output_parameters is not None and self._check_point_passed_comparison(node):
node.check_point_passed = True
self._update_parameters(node)
node.been_calibrated = True  # TODO: Think about this, together with its conditional above...

else:
logger.info(
"WORKFLOW: %s checkpoint failed, calibration will start just after the previously passed checkpoint.\n",
node.node_id,
)
node.check_point_passed = False

# If the node was a checkpoint then, depending on the result, we either stop diagnosing the next nodes or continue.
return not node.check_point_passed

# If no checkpoint is found, we can continue diagnosing the next nodes.
return False

def _check_point_passed_comparison(self, node: CalibrationNode) -> bool:
"""Computes whetter a checkpoint passed, based on whether the fidelities of the node are greater or equal to the check values.

Args:
node (CalibrationNode): The node to check the fidelities of.

Returns:
bool: Whether the fidelities of the node are greater than or equal to the check values.
"""
# If no check_value, any fidelity is good.
if node.check_value is None:
return True

# If check_value is present, but fidelities are not, the checkpoint doesn't pass.
if node.output_parameters is None:
return False

# If both check_value and fidelities are present, all fidelities must be greater than or equal to their check values.
return all(
fidelity_v >= node.check_value[fidelity_k]
for fidelity_k, fidelity_v in node.output_parameters["fidelities"].items()
)

def get_qubit_fidelities_and_parameters_df_tables(self) -> dict[str, pd.DataFrame]:
"""Generates the 1q, 2q, fidelities and parameters dataframes, with the last calibrations.

…
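
To make the ``_check_point_passed_comparison()`` logic above concrete, here is a small sketch of the data shapes it compares (the flat-dictionary structure of ``fidelities`` is an assumption, taken from the ``check_value`` docstring in ``calibration_node.py`` below; the keys and numbers are illustrative):

```python
# What a checkpoint notebook would export:
output_parameters = {
    "platform_parameters": [],  # parameters to set on the platform
    "fidelities": {"fidelity": 0.93},
}
check_value = {"fidelity": 0.85}

# Mirrors the `all(...)` comparison: every checked fidelity must reach its threshold.
passed = all(
    fidelity >= check_value[name]
    for name, fidelity in output_parameters["fidelities"].items()
)
print(passed)  # True -> the checkpoint passes and this part of the graph is skipped
```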
23 changes: 21 additions & 2 deletions src/qililab/calibration/calibration_node.py
@@ -63,6 +63,10 @@ class CalibrationNode: # pylint: disable=too-many-instance-attributes
the :class:`.CalibrationController` won't do the graph mapping properly, and the calibration will fail. Defaults to None.
input_parameters (dict | None, optional): Kwargs for input parameters to pass and be interpreted by the notebook. Defaults to None.
sweep_interval (np.ndarray | None, optional): Array describing the sweep values of the experiment. Defaults to None, which means the one specified in the notebook will be used.
check_point (bool, optional): Flag marking whether this notebook will be used as a checkpoint, to decide whether the ones before it need to be executed or not. Checkpoints should ideally be fast and
reliable, and their dependency on the previous notebooks strictly and physically grounded. Defaults to False.
check_value (dict | None, optional): Values to decide whether the checkpoint was passed successfully. Defaults to None.


Examples:

@@ -128,8 +132,9 @@ class CalibrationNode: # pylint: disable=too-many-instance-attributes
second = CalibrationNode(
nb_path="notebooks/second.ipynb",
qubit_index=qubit,

sweep_interval=np.arange(start=0, stop=19, step=1),
check_point=True,
check_value={"fidelity": 0.85},
)
nodes[second.node_id] = second

@@ -223,7 +228,8 @@ def fit(xdata, results):
}
)

where the ``platform_parameters`` are a list of parameters to set on the platform.
where the ``platform_parameters`` are a list of parameters to set on the platform, and the ``fidelities`` are used to show results in the calibration report,
or by the checkpoints logic together with the ``check_point`` and ``check_value`` arguments.

.. note::

@@ -238,6 +244,8 @@ def __init__(
node_distinguisher: int | str | None = None,
input_parameters: dict | None = None,
sweep_interval: np.ndarray | None = None,
check_point: bool = False,
check_value: dict | None = None,
):
if len(nb_path.split("\\")) > 1:
raise ValueError("`nb_path` must be written in unix format: `folder/subfolder/.../file.ipynb`.")
@@ -280,6 +288,17 @@ def __init__(
self.been_calibrated: bool = False
"""Flag whether this notebook has been already calibrated in a concrete run. Defaults to False."""

self.check_point: bool = check_point
"""Flag whether this notebook will be used to check if execute or not the ones before them. Checkpoints should ideally be fast and
reliable, and its dependency with previous notebooks strictly and phisically dependant. If not a check_point (default), then is False."""

self.check_value: dict | None = check_value if self.check_point else None
"""Values to decide whether the checkpoint was passed successfully. They have to have the same structure as the ``output_parameters["fidelities"]``
dictionary, in the corresponding notebook itself. If node is not a check_point (default), then its None."""

self.check_point_passed: bool | None = None
"""Flag whether this notebook has passed the check value when checked. If the notebook is not a check_point (default), then is None."""

def run_node(self) -> float:
"""Executes the notebook, passing the needed parameters and flags.

…
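
Finally, a short sketch of how the new ``CalibrationNode`` attributes behave right after construction, following the defaults documented above (import path assumed as before; notebook paths are illustrative):

```python
from qililab.calibration import CalibrationNode  # assumed import path

plain = CalibrationNode(nb_path="notebooks/first.ipynb", qubit_index=0)
assert plain.check_point is False        # not a checkpoint by default
assert plain.check_value is None         # forced to None for non-checkpoints
assert plain.check_point_passed is None  # only ever set by diagnose()

checkpoint = CalibrationNode(
    nb_path="notebooks/second.ipynb",
    qubit_index=0,
    check_point=True,
    check_value={"fidelity": 0.85},
)
assert checkpoint.check_value == {"fidelity": 0.85}
assert checkpoint.check_point_passed is None  # set later, during diagnose()
```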