Force backend computation dtype to float #64
Closed
💬 Pull Request Description
Following #41, the difference between the models' dtype and the computation dtype was reconsidered.
Because the oscillators in the SB backend live in [-1, 1], the backend computation dtype must be a float. Moreover, some key PyTorch methods are not available for `float16`, so only `float32` and `float64` are supported. Thus, the core Ising model used by the SB optimizer can only be defined with the `float32` (default) or `float64` dtype; any other dtype raises a `ValueError`.

However, `QuadraticPolynomial`s can still be of any dtype. When such a polynomial is converted to an Ising model, a new `dtype` argument must be passed to indicate the dtype of the generated Ising model. When calling `QuadraticPolynomial::optimize`, `QuadraticPolynomial::maximize` or `QuadraticPolynomial::minimize`, a `dtype` argument must also be provided to indicate the backend dtype (i.e. the dtype of the equivalent Ising model and thus of the SB optimizer computations). Once the optimal spins are retrieved by the polynomial at the end of the optimization and converted to integer values according to the optimization domain, the resulting tensors are cast back to the polynomial's dtype.
When the `dtype` parameter is passed to the `sb.maximize`/`sb.minimize` functions, it is used both as the `QuadraticPolynomial`'s dtype and as the optimization dtype. If not provided, `float32` is used.

Finally, for `Polynomial`s, if the `dtype` and/or `device` parameters are set to `None`, PyTorch's default dtype and/or device is used.

✔️ Check list
🚀 New features
None.
🐞 Bug fixes
None.
📣 Supplementary information
None.