Optimizer bounds improvement #9718

Closed
woodsp-ibm opened this issue Mar 3, 2023 · 10 comments
Labels: mod: algorithms (Related to the Algorithms module), type: feature request (New feature or request)

Comments

woodsp-ibm (Member) commented Mar 3, 2023

What should we add?

I would like to discuss improving the way bounds are handled and settle on/implement such an improvement:

For variational algorithms, like VQE, that use an optimizer, the initial_point (x0) for the minimization can be passed through the algorithm, but if an optimizer supports/requires bounds, the way a user might influence this for the algorithm is not obvious. For instance, using BOBYQA with VQE works with RealAmplitudes but fails with UCC, since the latter ends up with an unbounded setting, which fails on the current code. SNOBFIT needs bounds and raises an error to that effect, but how to get it to work has been asked. NLopt global optimizers need bounds, and internally any unbounded limit defaults to something that is not unbounded (-3pi or 3pi for lower and upper, respectively).

In VQE etc. the initial point used to be informed by the ansatz (preferred_init_point), such that if an explicit value was not passed by the user the algorithm would look to the ansatz, and if that did not exist it just picked a random point. This implicit informing by the ansatz was rather hidden and it was preferred to remove this mechanism. Now the initial point must be set by a user and, if it is left at the default, it will be random. I mention this because the situation with bounds is similar and action was already taken for the initial point.

An ansatz can presently inform about parameter_bounds. RealAmplitudes does (-pi, pi); its NLocal parent defaults to None, and UCC extends this (via EvolvedOperatorAnsatz) but does not override it. parameter_bounds has a setter, if you know about it and how it is used by VQE etc., so the value can be set/overridden. For a plain parameterized circuit this property does not exist but, Python being Python, it can be set (added) at runtime to inform the bounds for VQE. This allows one to set the bounds such that VQE will run with UCC and BOBYQA etc., as in the sketch below.
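
A minimal sketch of that runtime workaround, assuming the Qiskit class names discussed above and using arbitrary example values for the bounds:

```python
import numpy as np
from qiskit.circuit.library import RealAmplitudes

ansatz = RealAmplitudes(2, reps=1)
# RealAmplitudes already reports (-pi, pi) per parameter; the setter lets us override it:
ansatz.parameter_bounds = [(-np.pi / 2, np.pi / 2)] * ansatz.num_parameters

# For an ansatz that reports None (e.g. UCC), or a plain parameterized QuantumCircuit
# that has no such property, the attribute can simply be set/added at runtime, e.g.:
#     my_circuit.parameter_bounds = [(-np.pi, np.pi)] * my_circuit.num_parameters
# VQE then picks these bounds up and forwards them to an optimizer such as BOBYQA.
```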

So this is to discuss a possible improvement to variational algorithms like VQE with regard to bounds:

We could simply document better how things currently work, e.g. a tutorial/how-to that covers this.

Another option (which can come with doc improvements related to bounds) would be to add an optional bounds parameter that defaults to None (unbounded).
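
For illustration, a minimal sketch of how such an optional argument might be resolved inside the algorithm, assuming the convention that None means fully unbounded (the helper name and exact behavior are assumptions, not existing Qiskit code):

```python
from typing import Optional, Sequence, Tuple

Bounds = Sequence[Tuple[Optional[float], Optional[float]]]

def resolve_bounds(bounds: Optional[Bounds], num_parameters: int) -> list:
    """Return one (lower, upper) pair per parameter, defaulting to unbounded."""
    if bounds is None:
        return [(None, None)] * num_parameters
    if len(bounds) != num_parameters:
        raise ValueError("The number of bounds must match the number of parameters.")
    return list(bounds)
```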

We could, as was done with initial_point, remove the information from the ansatz and require a user to explicitly pass bounds if the optimizer requires them (optimizers have flags to query this); when the minimizer protocol is used, however, this is unknown to the algorithm. When using an optimizer instance the variational algorithm can simply raise an error if no bounds are given and the optimizer needs them. For a minimizer it would rely on whatever the underlying code's behavior is with regard to bounds if it fails to get them.

We could leave parameter_bounds on the ansatz and make it clear in some improved docs that this is also used, and how. A variational algorithm could let any user-defined bounds override those informed by the ansatz. The goal is to have the way bounds are set be more exposed to the user than it is currently, and make it easier for them.

@Cryoris @mrossinek @adekusar-drl I would welcome opinions, other ideas....

adekusar-drl (Contributor) commented:

Sounds like a documentation issue. Considering that the optimizer passed to an algorithm can be a callable, we can't verify whether the bounds are set properly and the algorithm can't enforce them from the ansatz parameters. I think a dedicated tutorial is overkill, but a section somewhere in VQE makes sense.

If we could have a single place in the codebase where we could verify and set bounds for all variational algorithms it would be beneficial for all. But I don't think we can implement such checks without significant changes to the algorithms, so better documentation and requiring a user to pass bounds to an optimizer is what we can do.

Cryoris (Contributor) commented Mar 7, 2023

What bounds a user wants to set could depend on the optimization problem rather than just the ansatz. Therefore I think your above suggestion of removing the bounds from the ansatz completely and having it as an argument on the VQE, analogous to initial_point, would be the most general and flexible solution 🙂

woodsp-ibm (Member, Author) commented:

> requiring a user to pass bounds to an optimizer is what we can do.

I had considered that approach (i.e. pass bounds to the optimizer so it is done in one place), but it felt a bit disjointed with the initial point (x0) being set via the algorithm. Yes, it would require more changes to touch the set of variational algorithms where this applies if done in VQE etc. rather than in the optimizers. For the minimizer protocol, this way you would have to arrange some partial/wrapper to set the bounds and ignore what comes through the callable that VQE etc. use (see the sketch below).
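
A sketch of that workaround for the minimizer protocol, assuming SciPy's minimize underneath; the wrapper deliberately ignores the bounds argument forwarded by the algorithm and substitutes its own (the bound values here are just an example):

```python
import numpy as np
from scipy.optimize import minimize

num_parameters = 8  # in practice, ansatz.num_parameters
my_bounds = [(-np.pi, np.pi)] * num_parameters

def bounded_minimizer(fun, x0, jac=None, bounds=None):
    # Ignore whatever bounds the algorithm forwarded (possibly None) and use our own.
    return minimize(fun, x0, jac=jac, bounds=my_bounds, method="L-BFGS-B")

# The callable can then be passed to VQE etc. in place of an Optimizer instance.
```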

> the optimizer passed to an algorithm can be a callable, we can't verify whether the bounds are set properly and the algorithm can't enforce them from the ansatz parameters

While we might be able to give a more meaningful message when an optimizer instance is used, in the case of bounds being required and a mismatch, if you used the callable it would come down to the underlying code's message. Since the callable is a bit more of an advanced usage, that is probably ok. This is also what happens if you use a SciPy optimizer directly, so I think it should be fine. We cannot cater to everything.

> If we could have a single place in the codebase where we could verify and set bounds for all variational algorithms it would be beneficial for all.

This is presently used as the function to validate bounds: https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/algorithms/utils/validate_bounds.py. As for a common place to set them, that is addressed by the first part of this comment above.
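
Roughly, that helper reads the bounds off the ansatz and falls back to unbounded; the sketch below is a paraphrase of its behavior rather than the exact source:

```python
def validate_bounds(circuit):
    """Return per-parameter (lower, upper) bounds, taken from the circuit if it has any."""
    if hasattr(circuit, "parameter_bounds") and circuit.parameter_bounds is not None:
        bounds = circuit.parameter_bounds
        if len(bounds) != circuit.num_parameters:
            raise ValueError(
                "The number of bounds does not match the number of circuit parameters."
            )
        return bounds
    return [(None, None)] * circuit.num_parameters
```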

rlosadagarcia commented Mar 10, 2023

After thinking about it, I have some ideas. However, I want to clarify that, as I don't have extensive experience coding at this level and I am not familiar with the structure of some Qiskit algorithm classes, I am unsure about the feasibility of my approach and whether it would be compatible with how optimizers are implemented throughout other classes.

The general approach that occurs to me would also allow the jac argument to be supported (I think). It wouldn't use the validate_bounds function that currently delivers bounds to VQE, for instance.

  • First, have some distinction set up to indicate which available optimizers support bounds. I think the SciPy optimizers have attributes related to whether bounds are supported that could help in this matter.
  • Since all optimizers have an options dictionary that can be provided by the user when constructing the optimizer class, 'bounds' and 'jac' could be specified in this options dictionary, to be used later. I believe that having extra options in the dictionary that aren't recognized by the SciPy optimizer does not affect the optimizer or the optimization process. If it does, they could be removed upon calling the minimize method.
  • When calling the optimizer.minimize method, as these arguments should be specified, both could either be popped and passed to minimize (with the optional possibility of updating the options through the optimizer's set_options method if needed) or simply obtained from the options dictionary. If the ._options attribute (which contains the options dictionary after initializing the optimizer class) does not have the keys 'bounds' or 'jac', they are set to None by default.

A schematic, simplistic approach would be to modify the call to minimize as follows:

```python
opt_results = self._optimizer.minimize(
    fun=cost_function,
    x0=self._initial_point,
    # pull 'jac' and 'bounds' out of the optimizer's options dict if the user
    # supplied them there, defaulting to None otherwise
    jac=self._optimizer._options.pop("jac", None),
    bounds=self._optimizer._options.pop("bounds", None),
)
```
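
Under this proposal a user might set things up roughly as follows; the 'bounds' option key is not recognized by the current optimizers, so this is purely illustrative:

```python
import numpy as np
from qiskit.algorithms.optimizers import SLSQP

num_parameters = 4  # in practice, ansatz.num_parameters
optimizer = SLSQP(maxiter=100)
# Stash the bounds in the options dict; the algorithm would later pop them back out
# as in the snippet above before forwarding the remaining options to SciPy.
optimizer.set_options(bounds=[(-np.pi, np.pi)] * num_parameters)
```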

mrossinek (Member) commented:

It was on my todo list to add my 2 cents here, and I had a quick chat with @Cryoris earlier today. So let me summarize as briefly as possible:

We think that adding an optional argument for bounds parallel to how we do it with the initial_point is the best way forward.


Some arguments against other approaches:

  1. Adding the bounds to the Optimizer/Minimizer interfaces does not work because (as also pointed out above) this can be a simple callable, and this would overcomplicate things unnecessarily. It would also be confusing to treat initial_point and bounds differently in the API.

  2. Using a simple keyword-dictionary is something we always try to avoid because it is harder to document and also does not provide "user-discoverability", for example during auto-completion or suggestions provided by your editor. Proper keyword arguments instead make the programmer's life simpler in this regard.

adekusar-drl (Contributor) commented:

After some thought, I think introducing bounds in the same way as initial_point is not a great idea. In this case we have to update all variational algorithms to make use of this additional parameter and pass it to the optimizer. And a new algorithm has to do the same again. This is not scalable and is prone to errors. I'd rather prefer to move initial_point and bounds to the optimizer itself. In this case, whatever algorithm we have, it would benefit from the existing code that handles initial_point and bounds. Well, if a user has a callable, then it is a problem of the user.

Cryoris (Contributor) commented Mar 10, 2023

> In this case we have to update all variational algorithms to make use of this additional parameter and pass it to the optimizer.

If we were to put the bounds and initial point into the optimizer we would have to change both (1) all optimizers and (2) all variational algorithms (because currently we pass bounds when we call the optimizer). Plus we'd be inconsistent with the SciPy interface, which we match on purpose.

I don't think it's a big problem to change the variational algorithms; these would only be (Sampling)VQE and VQD, right?

> Well, if a user has a callable, then it is a problem of the user.

I think this makes it our problem too 😛

adekusar-drl (Contributor) commented:

> If we were to put the bounds and initial point into the optimizer we would have to change both (1) all optimizers and (2) all variational algorithms (because currently we pass bounds when we call the optimizer).

You update (1) and (2) only once and then all new algorithms have support for initial_point and bounds out of the box.

> Plus we'd be inconsistent with the SciPy interface, which we match on purpose.

We can't match the SciPy interface because it is purely imperative, almost without classes, while we have classes for optimizers. A callable can be an optimizer and this is where we match their interface.

> I don't think it's a big problem to change the variational algorithms; these would only be (Sampling)VQE and VQD, right?

And everything in QML :)
And all other new algorithms as I mentioned.

Cryoris (Contributor) commented Mar 10, 2023

> You update (1) and (2) only once and then all new algorithms have support for initial_point and bounds out of the box.

In any approach all algorithms have to be updated once 😄

> A callable can be an optimizer and this is where we match their interface.

Yeah, exactly, and this I believe is something we want to maintain, as we introduced that as a guideline for the optimizers 🙂 Also, if we added __call__ to the Qiskit optimizers, we could use them interchangeably with the SciPy ones (like Steve already suggested a few times but we never got around to doing it).
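
A rough sketch of that __call__ idea (not existing API; the wrapper class and its signature are assumptions): the optimizer would become usable wherever a SciPy-style minimizer callable is expected.

```python
class CallableOptimizer:
    """Wrap a Qiskit optimizer so it can be called like scipy.optimize.minimize."""

    def __init__(self, optimizer):
        self._optimizer = optimizer

    def __call__(self, fun, x0, jac=None, bounds=None):
        return self._optimizer.minimize(fun, x0, jac=jac, bounds=bounds)
```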

> And everything in QML :)

Wouldn't we have to change the QML algorithms anyway if we change how the optimizers are called? Of course it's a bit annoying to change the signature of the QML algorithms, but I would think the effort is worth it to be consistent for the future 😄

ElePT (Contributor) commented Aug 22, 2023

Transferred to new repo: qiskit-community/qiskit-algorithms#57

ElePT closed this as not planned on Aug 22, 2023.