
Support V2 primitives #136

Open
woodsp-ibm opened this issue Jan 19, 2024 · 15 comments · May be fixed by #197

woodsp-ibm (Member) commented Jan 19, 2024

What should we add?

The Sampler and Estimator are being updated with new V2 versions that change the interface and will be available from Qiskit 1.0.0. V1 versions will at some point be deprecated and removed. Algorithms should add support for V2 versions and ultimately drop support for V1 once they are removed.

tnemoz (Contributor) commented Jul 29, 2024

Is it possible to submit some small PRs to help support V2 primitives? For instance, would a PR allowing VQE to support both V1 and V2 be useful? Or is it better to wait for qiskit-community/qiskit-machine-learning#817 to be merged? Or, another possibility, is it something you're already working on, so that a PR wouldn't help much?

woodsp-ibm (Member, Author)

@tnemoz There is no work being done by the present maintainers (IBM) on V2 (or ISA, #164) changes for algorithms. See the disclaimer in the README:

Qiskit Algorithms is no longer officially supported by IBM.

Like any other Apache 2 licensed code, you are free to use it or/and extend it, but please be aware that it is under your own risk.

The Qiskit Machine Learning maintainers, in the PR you linked, have copied over just the parts of algorithms that are needed/used in ML, so that they can maintain/support them there going forward independently. Qiskit Optimization has a PR that does likewise: qiskit-community/qiskit-optimization#623

I do not think either will change VQE, though, as neither uses it. There is a fair amount to be changed here overall, if algorithms were to be kept as it is and maintained going forward.

LenaPer commented Aug 5, 2024

Would it be possible to be assigned to this issue with @tnemoz ? We'd like to give it a try if possible.

tnemoz (Contributor) commented Aug 5, 2024

The new BaseSamplerV2 and BaseEstimatorV2 from Qiskit don't have an options attribute like the V1 versions did, while the BasePrimitiveV2 from qiskit-ibm-runtime does.

Is it worth it to add qiskit-ibm-runtime as a dependency and to check with isinstance whether it is actually possible to set these options? Something along the lines of:

if options is not None:
    if not isinstance(estimator, qiskit_ibm_runtime.BaseEstimatorV2):
        raise ValueError("options is only supported for a runtime Estimator")

This problem happens for instance for migrating ComputeUncompute to a V2 Sampler.

woodsp-ibm (Member, Author) commented Aug 6, 2024

@LenaPer @tnemoz I assigned the issue to you as requested. Clearly, to support Runtime it needs the ISA change (#164) as well. Very shortly the V1 primitives will be removed from Runtime (they have been deprecated there for a while), and those remaining in Qiskit (the reference ones that run classically) will be deprecated. As noted above, Qiskit Machine Learning has copied over what it used from here, and Qiskit Optimization likewise. ML has yet to do anything about ISA and V2, while Opt does have a draft PR addressing things.

If you go ahead and look at things, perhaps in more depth than you have so far, and decide you still want to move forward, perhaps outline what and how you want to change things, so we can have a discussion up front before you do too much.

Regarding the runtime comment above: in places we do have optional-install checks, so the check for runtime could be done conditionally based on whether it is installed, rather than making it a hard dependency.
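
A minimal sketch of what such an optional check might look like (the helper name and error message here are illustrative, assuming qiskit-ibm-runtime exposes EstimatorV2 as it does today):

    # Sketch: import qiskit-ibm-runtime only if it is installed, and reject
    # options otherwise. Names here are illustrative, not an actual API.
    try:
        from qiskit_ibm_runtime import EstimatorV2 as RuntimeEstimatorV2

        _HAS_RUNTIME = True
    except ImportError:
        _HAS_RUNTIME = False


    def check_options_supported(estimator, options):
        """Raise if options are given for a primitive that cannot accept them."""
        if options is None:
            return
        if not _HAS_RUNTIME or not isinstance(estimator, RuntimeEstimatorV2):
            raise ValueError("options are only supported for a runtime EstimatorV2")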

tnemoz (Contributor) commented Aug 8, 2024

If you go ahead and look at things, perhaps in more depth than you might have done so far, and decide you still want to move forwards perhaps outline what/how you want to change things to have a discussion up front before really doing too much.

The goal would be to remove support for the V1 primitives, since those are going to be deprecated, and to add ISA support, in a similar fashion to what's being done in the qiskit-optimization PR. Eventually, I think the goal would be to be able to use every algorithm here with the Runtime and Aer primitives (as noted above, the Qiskit ones have a somewhat different interface from these two), with as little friction as possible to move from one to the other.

If that's OK with you, we'll open a draft PR shortly that addresses only a small part of the issue (probably the minimum eigensolvers or phase estimation), just so that we can discuss with actual examples whether things are moving in the right direction according to you. That would avoid a painful review where the same mistake is repeated throughout all the files.

woodsp-ibm (Member, Author)

Sounds good. Supporting only V2 will be a breaking change for code written against the Qiskit or Aer primitives that still works, but since things are already broken with Runtime, where V1 has been removed, and V1 is about to be deprecated in Qiskit, just focusing on V2 seems fine.

I am not sure what interface difference exists, having not looked at them in detail for a while. The way algorithms worked with V1, construction differences did not matter: the primitives are built and passed to the algorithms, so as long as run behaved the same, that was all that counted. I think the options on run could vary with V1, but since options are implementation dependent (though shots was often common), algorithms would not, in general, alter them, leaving them as set during construction.

I had put some thoughts into #164 (the ISA one). Doing it similarly to how Opt did things is OK - there the return of the new Sampler was converted back to probabilities to keep it compatible with how the algorithms processed V1. I had thought that for V2 Sampler usage maybe things should move over to counts, especially if this is reflected in any result, as that would be more compatible with what is expected now.
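
For illustration, reading counts from a V2 Sampler result looks something like this (a minimal sketch using the reference StatevectorSampler; the data field name follows the classical register, here "meas" created by measure_all()):

    from qiskit import QuantumCircuit
    from qiskit.primitives import StatevectorSampler

    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure_all()  # creates a classical register named "meas"

    sampler = StatevectorSampler()
    result = sampler.run([qc], shots=1024).result()
    # V2 results are indexed per pub; counts come from the named register
    counts = result[0].data.meas.get_counts()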

Sure, doing a draft PR and starting with some chosen subset seems great. You might want to try to find both an Estimator and a Sampler example, to see how it looks across both when used here.

tnemoz (Contributor) commented Aug 9, 2024

@woodsp-ibm The Draft PR is #197. Let us know what you think!

Might want to try and find both an Estimator and Sampler example to see how it looks across both when used here.

I might have overlooked this 😅

tnemoz linked a pull request (#197) on Aug 9, 2024 that will close this issue.
tnemoz (Contributor) commented Aug 15, 2024

While adapting the code for Grover, I noticed that some tests would randomly fail, even though I set a seed for the StatevectorSampler. In particular, the test that fails is this one:

    @idata(itertools.product(["shots"], [[1, 2, 3], None, 2]))
    @unpack
    def test_iterations_with_good_state_sample_from_iterations(self, use_sampler, iterations):
        """Test the algorithm with different iteration types and with good state"""
        grover = self._prepare_grover(use_sampler, iterations, sample_from_iterations=True)
        problem = AmplificationProblem(Statevector.from_label("111"), is_good_state=["111"])
        result = grover.amplify(problem)
        self.assertEqual(result.top_measurement, "111")

The problem comes from these lines in grover.py:

            if self._sample_from_iterations:
                power = algorithm_globals.random.integers(power)

During the test, power is set to 2, which means that this line samples a new power being either 0 or 1. If it samples 1, then everything works fine, but if it samples 0, no oracle is added to the circuit during the single iteration of the algorithm, which results in a uniform distribution over the possible bitstrings.

Using the SamplerV1, it turns out that 111 is still the most sampled bitstring, but unless I'm missing something, that's not something relevant to test: we're supposed to get a uniform distribution if we don't add an oracle. It just so happens that with seed 123, 111 is the most sampled outcome, but using seed 42, for instance, fails the test.

What would be the best way to deal with this? I must admit that I don't really understand the point of this option, so I'm unsure what the best course of action is here. Does it make sense to use this option when performing a single iteration with a given power?

Note that the test would (probably) pass if we sampled a power in (0, power) instead of [0, power), since that would ensure that we at least progressed a bit in the right direction. But once again, since I don't really understand this option in the first place, I don't know whether it would make sense to do such a thing.
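
For concreteness, the change hinted at above might look like this in grover.py (just an illustration of the idea, not necessarily the right fix):

            if self._sample_from_iterations and power > 1:
                # Sample from [1, power) so at least one Grover iterate is applied
                power = algorithm_globals.random.integers(1, power)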

woodsp-ibm (Member, Author) commented Aug 15, 2024

@Cryoris would you have any response to the prior comment above? I think you had done refactoring at some point in Grover and may have more insight.

woodsp-ibm (Member, Author)

I will just add this here as a further change around the primitives: it seems a utility method, init_observable, is now deprecated in the newly released Qiskit 1.2.0, so this ideally needs to be addressed too. Searching this repo, it's used in a few places; this message comes from one of them:

/home/runner/work/qiskit-algorithms/qiskit-algorithms/qiskit_algorithms/gradients/lin_comb/lin_comb_estimator_gradient.py:144: 
DeprecationWarning: The function ``qiskit.primitives.utils.init_observable()`` is deprecated as of qiskit 1.2. It will be removed no 
earlier than 3 months after the release date. Use the constructor of ``SparsePauliOp`` instead.
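
For reference, the replacement suggested by the warning would be roughly the following (a sketch; the exact call sites in the gradient code may need more handling):

    from qiskit.quantum_info import SparsePauliOp

    # Before (deprecated as of Qiskit 1.2):
    # observable = init_observable(observable)

    # After: build the SparsePauliOp directly from the input operator
    observable = SparsePauliOp(observable)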

tnemoz (Contributor) commented Aug 30, 2024

@woodsp-ibm A question concerning the use of options. I'll take the example of ComputeUncompute, but this can also be seen in other files for the Estimator, like the gradients one.

There are a lot of lines that are used to update the local options and to decide which options to use. But as far as I can tell, these options are only ever used in the Sampler's run method. The thing is, the V2 Sampler now only supports the number of shots in the run method, so it seems a bit overkill to support arbitrary dictionaries.

We could leave it like this in case additional options are used, but the problem is that some lines now break with the V2 primitives. For instance, this line:

        opts = copy(self._sampler.options)

This would work for a SamplerV2 from qiskit-ibm-runtime, but not for a StatevectorSampler from qiskit. The fact that the API differs between these two objects makes it quite unclear how to deal with this.

I initially wanted to simply replace every mention of options by a single int representing the number of shots, but there's one thing I'm unsure about. The local options are passed to the AlgorithmJob constructor here:

        return AlgorithmJob(ComputeUncompute._call, sampler_job, circuits, self._local, local_opts)

And I'm unsure why, which means I'm also unsure about how to replace it, especially since it seems to be tested:

            self.assertEqual(result.options.__dict__, {"shots": 1024})

What would be the best course of action here?

Supporting only the number of shots for the options also considerably simplifies the code.
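
To make this concrete, the shots-only version I have in mind would look roughly like this inside ComputeUncompute (a hypothetical simplification using a single shots value, just to show the shape of the change):

        # Hypothetical shots-only variant: keep a single shots value instead
        # of merging option dictionaries.
        # BaseSamplerV2.run takes the pubs (here, bare circuits) plus an
        # optional shots keyword.
        sampler_job = self._sampler.run(list(circuits), shots=shots)
        # Pass shots along instead of a local-options dict.
        return AlgorithmJob(ComputeUncompute._call, sampler_job, circuits, self._local, shots)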

If it can help discussing it, I can update the draft PR with these changes to better show what I'm talking about.

woodsp-ibm (Member, Author) commented Aug 30, 2024

Options was part of the primitive base class for the V1 versions. We passed it so you could run things without creating a new instance/primitive with new default options, but could re-use it and change options, like the other primitives of the time. If shots is all that is supported generically now, your suggestion to limit things to that makes sense. (I think it's precision for the Estimator, right?)
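
(For reference, the V2 Estimator analogue would be something like the following, with precision playing the role that shots does for the Sampler; a minimal sketch using the reference StatevectorEstimator:)

    from qiskit import QuantumCircuit
    from qiskit.primitives import StatevectorEstimator
    from qiskit.quantum_info import SparsePauliOp

    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    observable = SparsePauliOp("ZZ")

    estimator = StatevectorEstimator()
    # V2 Estimator pubs are (circuit, observable) tuples; precision is the
    # generic run-time option instead of shots
    result = estimator.run([(qc, observable)], precision=0.01).result()
    evs = result[0].data.evs  # expectation value(s)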

Why was options passed to the AlgorithmJob... I imagine we did that for compatibility with the other primitives and what they did, e.g. here in the original Qiskit Sampler: https://github.com/Qiskit/qiskit/blob/8e5fab6c7176edcb134e9b00d141efa32f2fe3db/qiskit/primitives/sampler.py#L145. The creation of the job in the newer statevector_sampler looks different, and maybe that should now be more the model/reference, so that things are done compatibly.

Does that help at all?

tnemoz (Contributor) commented Sep 2, 2024

Does that help at all?

It definitely does, thanks!

tnemoz (Contributor) commented Sep 3, 2024

@woodsp-ibm I have two questions:

  • What would be the best way to test the insertion of the transpiler in the algorithms? On the one hand, I want to test it on every existing test, but on the other hand, this increases the time taken to run the tests (and makes the whole thing look ugly, e.g.).
  • I'm not sure I understand the purpose of these test lines:
        with self.subTest("check final accuracy"):
            self.assertAlmostEqual(result.fun[0], 0.473, places=3)

When I changed ComputeUncompute to support BaseSamplerV2, this test broke, even though all the tests for ComputeUncompute pass (though I must say that I had to reduce the expected precision to 1e-2 in these tests). Furthermore, I'm consistently hitting 0.298... instead of the expected 0.473.... What bothers me is that I don't see where the discrepancy could come from. Note that I changed the initial seed that was set to be sure that it's not just some random luck that makes us pass the test.

Also, when I increase maxiter to higher values like 10 or 100, I seem to get closer to a negative value, which would indicate that this value in the test is nothing special and is clearly seed-dependent. But even when I change the seed, I end up around 0.3, far from the expected value. Do you have any idea what could be going on? Or would you have a simple case with which I could test whether I'm getting the right results?
