Releases · bdusell/semiring-einsum
v1.2.0
New features:
- Added support for PyTorch 2.0 and above.
- When automatic block sizing based on available GPU memory is used, if it
  is estimated that no memory is left, a block size of 1 is now tried
  (hoping for the best) instead of raising an exception. A usage sketch
  follows this list.
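Relying on automatic block sizing just means omitting `block_size`. A minimal sketch, assuming the package's documented `compile_equation` and `log_einsum` entry points (the equation and shapes are illustrative):

```python
import torch
import semiring_einsum

# Compile the einsum equation once and reuse it.
EQUATION = semiring_einsum.compile_equation('ij,jk->ik')

A = torch.randn(8, 16)
B = torch.randn(16, 4)

# No explicit block_size: a block size is chosen automatically from
# available memory. As of v1.2.0, if no memory appears to be left, a
# block size of 1 is tried instead of an exception being raised.
C = semiring_einsum.log_einsum(EQUATION, A, B)
print(C.shape)  # torch.Size([8, 4])
```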
Bug fixes:
- Automatic block sizing now works for tensors of dtype `bool`.
- Fixed problems with empty argmax tensors in the Viterbi semiring
  (wrong dtype and device); see the sketch below.
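A sketch of the Viterbi case, assuming the package's `log_viterbi_einsum_forward` function, which returns max values together with argmax indices for the summed variables (the shapes and argmax layout here are assumptions, not guaranteed by these notes):

```python
import torch
import semiring_einsum

EQUATION = semiring_einsum.compile_equation('ij,jk->ik')
A = torch.randn(8, 16)
B = torch.randn(16, 4)

# Max-plus (Viterbi) forward pass: max values plus argmax indices for
# the summed variable j.
max_vals, argmax = semiring_einsum.log_viterbi_einsum_forward(EQUATION, A, B)
print(max_vals.shape)  # torch.Size([8, 4])

# With no summed variables at all, the argmax tensor is empty; the
# v1.2.0 fix gives this empty tensor the correct dtype and device.
NO_SUM = semiring_einsum.compile_equation('ij->ij')
_, empty_argmax = semiring_einsum.log_viterbi_einsum_forward(NO_SUM, A)
print(empty_argmax.dtype, empty_argmax.device)
```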
v1.1.0
New features and performance improvements:
- The `block_size` argument can now be omitted; by default, an appropriate
  block size will be chosen based on the amount of available memory. (A
  sketch after this list demonstrates this together with the `save_*` and
  `grad_of_neg_inf` options below.)
- Sped up log einsum by saving some tensors from the forward pass that
  can be reused in the backward pass, at the cost of more memory usage.
  The old, slower behavior can be restored using the `save_max` and
  `save_sumexpsub` options.
- Sped up log einsum by using `torch.Tensor.nan_to_num_()` to clip
  infinite values. Use PyTorch 1.8.0 or later to take advantage of this.
- Sped up log einsum by using `torch.amax()` to compute maximum values
  over multiple dimensions at once. Use PyTorch 1.7.0 or later to take
  advantage of this.
- Added an option to log einsum to avoid NaNs in the gradient when all terms
  are -inf. By default, the gradient of a summation whose output is -inf
  is still NaN (same as `torch.logsumexp()`), but the new `grad_of_neg_inf`
  option can now be used to set it to 0 instead.
- Added support for earlier versions of PyTorch (1.1.0 and later) and
  Python (3.6.1 and later).
- Added a `py.typed` file for compatibility with type checking tools like
  mypy.
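A sketch combining the new options above. The names `save_max`, `save_sumexpsub`, and `grad_of_neg_inf` are taken verbatim from these notes, but treating them as keyword arguments of `log_einsum` (and passing `0.0` for the gradient value) is an assumption; check the documentation for the exact signatures:

```python
import torch
import semiring_einsum

EQUATION = semiring_einsum.compile_equation('ij,jk->ik')

# All-(-inf) input: every summation in the output is -inf.
A = torch.full((2, 3), float('-inf'), requires_grad=True)
B = torch.randn(3, 4)

# block_size omitted: chosen automatically from available memory.
C = semiring_einsum.log_einsum(EQUATION, A, B)

# Assumed kwargs restoring the old, slower, memory-lean behavior.
C_slow = semiring_einsum.log_einsum(
    EQUATION, A, B, save_max=False, save_sumexpsub=False)

# Assumed kwarg avoiding NaN gradients when all terms are -inf; by
# default the gradient would be NaN, as with torch.logsumexp().
C_safe = semiring_einsum.log_einsum(EQUATION, A, B, grad_of_neg_inf=0.0)
C_safe.sum().backward()
print(A.grad)  # zeros instead of NaN
```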
Bug fixes:
- 0-dimensional inputs no longer cause exceptions to be raised.
- Log einsum now handles input values of +inf. (A sketch after this list
  illustrates both of these fixes.)
- Log Viterbi einsum no longer raises an exception when there are no
summed variables or when the output is 0-dimensional.
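A short sketch of the fixed edge cases, under the same API assumptions as above (whether `compile_equation` accepts an equation with scalar operands such as `',->'` is itself an assumption):

```python
import torch
import semiring_einsum

# 0-dimensional (scalar) inputs no longer raise.
SCALAR_EQ = semiring_einsum.compile_equation(',->')
x = torch.tensor(2.0)
y = torch.tensor(3.0)
print(semiring_einsum.log_einsum(SCALAR_EQ, x, y))  # tensor(5.)

# +inf input values are now handled by log einsum.
EQUATION = semiring_einsum.compile_equation('ij,jk->ik')
A = torch.tensor([[float('inf'), 0.0]])
B = torch.randn(2, 3)
print(semiring_einsum.log_einsum(EQUATION, A, B))  # inf entries, not NaN
```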