Hi,
This question is similar to 'Fall back to CPU for jax.scipy.linalg.schur?' #12245
and related to 'pure_callback for eigendecomposition' #14199.
I am trying to run my matrix operations on GPU or TPU, except for the eigendecomposition, which I jit on CPU since it is not implemented on those backends.
Below is the code I used to run it on CPU. It worked on the GPU backend but failed on the TPU backend with the following error:
NotImplementedError: 64-bit types not supported.
Is there a workaround to force the eigendecomposition onto CPU while everything else runs on TPU?
Thanks for reading.
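One possible approach, in the spirit of the `pure_callback` route mentioned in #14199, is sketched below. This is not the original post's code (which is not shown here); the names `eigh_on_host` and `reconstruct` are illustrative. The idea is to wrap NumPy's eigendecomposition in `jax.pure_callback` so it always executes on the host CPU, while the surrounding jitted code runs on the accelerator; keeping the declared result dtypes at 32 bits should also sidestep the TPU's lack of 64-bit support.

```python
import jax
import jax.numpy as jnp
import numpy as np

def eigh_on_host(a):
    # Runs as a host (CPU) callback: NumPy's eigh executes outside XLA.
    w, v = np.linalg.eigh(a)
    # Cast back to the input dtype (e.g. float32) so the TPU never
    # receives 64-bit values.
    return w.astype(a.dtype), v.astype(a.dtype)

@jax.jit
def reconstruct(a):
    n = a.shape[0]
    # pure_callback requires the output shapes/dtypes declared up front.
    w, v = jax.pure_callback(
        eigh_on_host,
        (jax.ShapeDtypeStruct((n,), a.dtype),
         jax.ShapeDtypeStruct((n, n), a.dtype)),
        a,
    )
    # Everything outside the callback stays on the accelerator:
    # V diag(w) V^T via broadcasting.
    return (v * w) @ v.T

a = jnp.eye(3, dtype=jnp.float32)
print(reconstruct(a))
```

For a symmetric input, `reconstruct` should return (up to floating-point error) the input matrix itself, which makes it easy to sanity-check the round trip on any backend.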