
Force backend computation dtype to float #64

Closed · wants to merge 3 commits

Conversation

@bqth29 (Owner) commented Apr 20, 2024

💬 Pull Request Description

Following #41, we reflected on the difference between the models' dtype and the computation dtype.

Because the oscillators in the SB backend live in [-1, 1], the backend computation dtype must be a floating-point type. Moreover, some key PyTorch methods are not available for float16, so only float32 and float64 are supported.

Thus, the core Ising model used by the SB optimizer can only be defined with a float32 (default) or float64 dtype; any other dtype raises a ValueError.
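A minimal sketch of this restriction, assuming the core model is exposed as simulated_bifurcation.core.Ising and accepts a dtype keyword (both assumptions, for illustration only):

```python
import torch

from simulated_bifurcation.core import Ising  # import path assumed

J = torch.tensor([[0.0, -1.0], [-1.0, 0.0]])

ising_default = Ising(J)                      # float32 by default
ising_double = Ising(J, dtype=torch.float64)  # float64 is also accepted

try:
    Ising(J, dtype=torch.float16)  # half precision lacks required PyTorch ops
except ValueError as error:
    print(error)  # any dtype other than float32/float64 is rejected
```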

However, a QuadraticPolynomial can still have any dtype. When such a polynomial is converted to an Ising model, a new dtype argument must be passed to set the dtype of the generated Ising model. When calling QuadraticPolynomial::optimize, QuadraticPolynomial::maximize, or QuadraticPolynomial::minimize, a dtype argument must also be provided to set the backend dtype (i.e., the dtype of the equivalent Ising model and thus of the SB optimizer's computations).
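A sketch of the two dtype arguments at play; the to_ising method name and the exact signatures are assumed for illustration:

```python
import torch

from simulated_bifurcation import QuadraticPolynomial  # import path assumed

Q = torch.randint(-5, 5, (4, 4), dtype=torch.int32)
polynomial = QuadraticPolynomial(Q)  # the polynomial itself may use any dtype (int32 here)

# Converting to an Ising model now requires the target backend dtype explicitly
ising = polynomial.to_ising(dtype=torch.float64)  # method name and signature assumed

# The same dtype argument sets the backend dtype when optimizing directly
best_vector, best_value = polynomial.minimize(dtype=torch.float32)  # other solver parameters omitted
```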

Once the optimal spins are retrieved at the end of the optimization and converted to integer values according to the optimization domain, the resulting tensors are cast to the polynomial's dtype.
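Continuing the sketch above, the returned tensors follow the polynomial's dtype rather than the computation dtype:

```python
# The backend computed in float32, but the optimal vector, once mapped to the
# optimization domain, is cast back to the polynomial's dtype (int32 here).
assert best_vector.dtype == torch.int32
```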

When the dtype parameter is passed to the sb.maximize/sb.minimize functions, it is used both as the QuadraticPolynomial's dtype and as the optimization dtype. If it is not provided, float32 is used.
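For the module-level helpers, a single dtype therefore covers both roles (a sketch; any solver parameters beyond dtype are omitted):

```python
import torch

import simulated_bifurcation as sb

Q = torch.randn(10, 10)

# dtype is used both for the underlying QuadraticPolynomial and for the backend
best_vector, best_value = sb.minimize(Q, dtype=torch.float64)

# Omitting dtype falls back to float32 for both
best_vector, best_value = sb.minimize(Q)
```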

We concluded that two separate dtype parameters are not useful for sb.maximize and sb.minimize, since the model is only used for the optimization and is never returned to the user. Thus, using the computation dtype as the model's dtype is acceptable. Users who really want to create the model with a specific dtype can first build it with build_model and then call one of the three optimization methods.
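Reusing Q and the sb import from the previous sketch, decoupling the two dtypes would then look like this (build_model's exact signature is assumed):

```python
# The model is built with its own dtype...
model = sb.build_model(Q, dtype=torch.float64)

# ...and a distinct backend computation dtype is passed at optimization time
best_vector, best_value = model.minimize(dtype=torch.float32)
```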

Finally, for polynomials, if the dtype and/or device parameter is set to None, PyTorch's default dtype and/or device is used.
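A sketch of the None fallback, assuming the constructor keywords match this description:

```python
import torch

from simulated_bifurcation import QuadraticPolynomial  # import path assumed

torch.set_default_dtype(torch.float64)

poly = QuadraticPolynomial(torch.randn(4, 4), dtype=None, device=None)
# poly now uses torch.float64 and PyTorch's current default device
```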

✔️ Checklist

  • The code matches the styling rules
  • The new code is covered by relevant tests
  • Documentation was added

🚀 New features

None.

🐞 Bug fixes

None.

📣 Supplementary information

None.

@bqth29 linked an issue Apr 20, 2024 that may be closed by this pull request

codecov bot commented Apr 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 100.00%. Comparing base (6194a9e) to head (9ff455d).
Report is 6 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##              main       #64   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           36        36           
  Lines         1600      1612   +12     
=========================================
+ Hits          1600      1612   +12     


@bqth29 bqth29 changed the title Force backend computation to float Force backend computation dtype to float Apr 20, 2024
@bqth29 bqth29 added ready This PR is ready to be reviewed refactoring Existing code is refactored (no new feature, no breaking change) and removed ready This PR is ready to be reviewed labels May 28, 2024
@bqth29 (Owner, Author) commented Sep 29, 2024

Overridden by #78

@bqth29 bqth29 added the duplicate This issue or pull request already exists label Sep 29, 2024
Labels

  • duplicate: This issue or pull request already exists
  • refactoring: Existing code is refactored (no new feature, no breaking change)
Development

Successfully merging this pull request may close these issues.

SB Optimizer computation dtype v. Model dtype
1 participant