
ADAM bug when calculating the gradient in batches #178

Open
proeseler opened this issue May 16, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@proeseler
Contributor

proeseler commented May 16, 2024

Environment

  • Qiskit Algorithms version: 1.0.1
  • Python version: 3.11
  • Operating system: Linux

What is happening?

A single small error prevents ADAM from calculating gradient_num_diff in batches with max_evals_grouped. ADAM passes only "fun, self._eps" as arguments when calling gradient_num_diff, so max_evals_grouped defaults to None, which in turn becomes max_evals_grouped=1. As a result, regardless of any call to set_max_evals_grouped, ADAM always effectively runs with max_evals_grouped=1.
Optimizer class method minimize:

[screenshot of the minimize source]

ADAM class method gradient_num_diff:

[screenshot of the gradient_num_diff source]
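
In outline, the relevant pieces look like this (paraphrased from the 1.0.1 sources, not a verbatim copy):

```python
# Optimizer.gradient_num_diff (paraphrased): batching is opt-in via
# max_evals_grouped, and an omitted value falls back to 1 (no batching).
@staticmethod
def gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None):
    if max_evals_grouped is None:
        max_evals_grouped = 1
    ...

# ADAM.minimize (paraphrased): self._max_evals_grouped is never forwarded,
# so the default above always applies.
jac = Optimizer.wrap_function(Optimizer.gradient_num_diff, (fun, self._eps))
```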


How can we reproduce the issue?

Create an ADAM optimizer and call set_max_evals_grouped on it with any limit. Then call minimize: you will not notice any change in runtime or CPU usage compared to leaving the limit unset.
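
A minimal reproduction sketch (the toy objective is a stand-in and handles both a single point and a list of points, since gradient_num_diff may pass grouped points as a batch):

```python
import numpy as np
from qiskit_algorithms.optimizers import ADAM

# Toy objective standing in for an expensive circuit evaluation.
def cost(params):
    params = np.asarray(params, dtype=float)
    if params.ndim == 2:  # a batch (list) of parameter points
        return np.sum(params ** 2, axis=1)
    return float(np.sum(params ** 2))  # a single point

optimizer = ADAM(maxiter=10)
optimizer.set_max_evals_grouped(8)  # request batched gradient evaluations

# Due to the bug, gradient_num_diff still receives max_evals_grouped=None
# internally, so the gradient points are evaluated one at a time.
result = optimizer.minimize(fun=cost, x0=np.zeros(4))
print(result.x)
```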

What should happen?

I could quickly fix the error by adding the missing argument.
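
Concretely, that would be a one-line change in ADAM's minimize (a sketch of the idea, not the merged diff):

```python
# Before: the batch size configured via set_max_evals_grouped is dropped
jac = Optimizer.wrap_function(Optimizer.gradient_num_diff, (fun, self._eps))

# After: forward it so gradient_num_diff can group the evaluations
jac = Optimizer.wrap_function(
    Optimizer.gradient_num_diff, (fun, self._eps, self._max_evals_grouped)
)
```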

Any suggestions?

No response

@proeseler proeseler added the bug Something isn't working label May 16, 2024
@woodsp-ibm
Member

So the fix would simply be to add self._max_evals_grouped to the call. If you would like to contribute the change, that would be great. Please also add a unit test that makes sure the outcome is still as expected when max_evals_grouped is set while using ADAM, and a release note (reno) to document the fix. As to how grouping a set of points affects CPU usage: that depends on how the function computes things. Locally, the points could be processed in parallel; for remote execution, grouping was done to reduce queuing overhead by bundling the circuits for each point of the gradient into a single call/run.
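
A minimal sketch of such a test (names are illustrative; it assumes the objective receives a list of points per call when evaluations are grouped):

```python
import unittest
import numpy as np
from qiskit_algorithms.optimizers import ADAM

class TestAdamMaxEvalsGrouped(unittest.TestCase):
    def test_gradient_evaluations_are_batched(self):
        batch_sizes = []  # how many points arrive per objective call

        def cost(params):
            params = np.asarray(params, dtype=float)
            if params.ndim == 2:  # a batch of points
                batch_sizes.append(len(params))
                return np.sum(params ** 2, axis=1)
            batch_sizes.append(1)  # a single point
            return float(np.sum(params ** 2))

        optimizer = ADAM(maxiter=2)
        optimizer.set_max_evals_grouped(4)
        result = optimizer.minimize(fun=cost, x0=np.zeros(4))

        # With the fix in place, at least one call should carry several points.
        self.assertTrue(any(size > 1 for size in batch_sizes))
        self.assertEqual(len(result.x), 4)

if __name__ == "__main__":
    unittest.main()
```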

@proeseler
Contributor Author

I will do so 👍

woodsp-ibm added a commit that referenced this issue Aug 20, 2024
Co-authored-by: Peter Röseler <peter.roeseler@gmail.com>
Co-authored-by: Steve Wood <40241007+woodsp-ibm@users.noreply.github.com>
Co-authored-by: Elena Peña Tapia <57907331+ElePT@users.noreply.github.com>
mergify bot pushed a commit that referenced this issue Aug 20, 2024
Co-authored-by: Peter Röseler <peter.roeseler@gmail.com>
Co-authored-by: Steve Wood <40241007+woodsp-ibm@users.noreply.github.com>
Co-authored-by: Elena Peña Tapia <57907331+ElePT@users.noreply.github.com>
(cherry picked from commit 6724b47)
mergify bot added a commit that referenced this issue Aug 20, 2024 (#200)

Co-authored-by: Peter Roeseler <148069723+proeseler@users.noreply.github.com>