# Releases: e3nn/e3nn-jax
## 0.20.7 (2024-01-26)

### Added
- `e3nn.where` function
- Add optional `mask` argument in `e3nn.flax.BatchNorm`

### Changed
- Replace `jnp.ndarray` by `jax.Array`
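A `mask` argument on a batch-norm layer typically excludes padded rows from the batch statistics. The sketch below illustrates that idea in plain numpy; the function name and shapes are made up for illustration and this is not the `e3nn.flax.BatchNorm` implementation.

```python
import numpy as np

def masked_batch_stats(x, mask):
    """Mean/variance over only the unmasked rows (hypothetical sketch of
    what a `mask` argument to a BatchNorm layer typically does)."""
    w = mask.astype(x.dtype)[:, None]          # (batch, 1) weights
    n = w.sum()
    mean = (w * x).sum(axis=0) / n
    var = (w * (x - mean) ** 2).sum(axis=0) / n
    return mean, var

x = np.array([[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]])
mask = np.array([True, True, False])           # last row is padding
mean, var = masked_batch_stats(x, mask)
print(mean)  # [2. 3.] -- the padding row is ignored
```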
## 2024-01-05

### Added
- `e3nn.ones` and `e3nn.ones_like` functions
- `e3nn.equinox` submodule

### Fixed
- Python 3.9 compatibility

Thanks to @ameya98, @SauravMaheshkar and @pabloferz
## 2023-12-24
## 2023-11-17

### Added
- `e3nn.flax.BatchNorm`
- `e3nn.scatter_mean`
- Add `e3nn.utils.vmap` also directly to the `e3nn` module: `e3nn.vmap`
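A scatter-mean averages values into buckets selected by an index array. The following numpy sketch shows the idea; `e3nn.scatter_mean` itself operates on jax arrays and `IrrepsArray`s, so names and shapes here are illustrative only.

```python
import numpy as np

def scatter_mean(values, index, out_size):
    """Average `values` into `out_size` buckets given a per-row `index`
    (a numpy sketch of the scatter-mean idea, not the e3nn code)."""
    sums = np.zeros(out_size)
    counts = np.zeros(out_size)
    np.add.at(sums, index, values)     # unbuffered scatter-add
    np.add.at(counts, index, 1.0)
    return sums / np.maximum(counts, 1.0)  # empty buckets stay 0

vals = np.array([1.0, 2.0, 3.0, 5.0])
idx = np.array([0, 0, 1, 1])
result = scatter_mean(vals, idx, 3)
print(result)  # [1.5 4.  0. ]
```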
2023-09-25
Added
with_bias
argument toe3nn.haiku.MultiLayerPerceptron
ande3nn.flax.MultiLayerPerceptron
Fixed
- Improve compilation speed and stability of
s2grid
for largelmax
(useis_normalized=True
inlpmn_values
)
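The effect of a `with_bias` switch can be seen in a toy MLP: without biases each layer is a purely linear map, so a zero input stays zero. This numpy sketch is a hypothetical illustration, not the e3nn `MultiLayerPerceptron` implementation.

```python
import numpy as np

def mlp(x, weights, biases=None, act=np.tanh):
    """Tiny MLP sketch illustrating a with/without-bias switch."""
    for i, W in enumerate(weights):
        x = x @ W
        if biases is not None:    # with_bias=False <=> biases is None
            x = x + biases[i]
        if i < len(weights) - 1:  # no activation after the last layer
            x = act(x)
    return x

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
out = mlp(np.zeros(4), Ws)
print(out)  # [0. 0.] -- bias-free layers map zero to zero
```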
## 2023-09-13

### Changelog

#### Changed
- Add back the optimizations with the lazy `._chunks` that were removed in 0.19.0
## 2023-09-09

### Highlight

tl;dr: mostly fixes issue #38.

In version 0.19.0, I removed the lazy `_list` attribute of `IrrepsArray` to fix the issues with `tree_util`, `grad` and `vmap`. In this version (0.20.0) I found a way to put back that lazy attribute, now called `_chunks`, in a way that does not interfere with `tree_util`, `grad` and `vmap`. `_chunks` is dropped when using `tree_util`, `grad` and `vmap`, unless you use `e3nn.vmap`.
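The pattern described above, a lazy cache that is deliberately left out of a container's flattened representation so that any flatten/unflatten round-trip drops it, can be sketched in plain Python. Names here (`Container`, `flatten`, `unflatten`) are made up for illustration; the real mechanism uses JAX's pytree registration for `IrrepsArray`.

```python
class Container:
    """Holds data plus a lazy cache that flattening deliberately drops."""

    def __init__(self, data, chunks=None):
        self.data = data          # the real content
        self._chunks = chunks     # lazy cache, possibly None

    def flatten(self):
        # Only `data` survives; `_chunks` is intentionally left out,
        # so transformations that round-trip through flatten/unflatten
        # never see (or have to differentiate through) the cache.
        return (self.data,)

    @classmethod
    def unflatten(cls, leaves):
        return cls(leaves[0])     # the cache comes back as None

c = Container([1.0, 2.0], chunks=["cached chunks"])
c2 = Container.unflatten(c.flatten())
print(c2._chunks)  # None -- the cache was dropped by the round-trip
```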
### Changelog

#### Added
- `e3nn.Irreps.mul_gcd`
- `e3nn.IrrepsArray.extend_with_zeros` to extend an array with zeros; can be useful for residual connections

#### Changed
- Rewrite `e3nn.tensor_square` to be simpler (and faster?)
- Use `jax.scipy.special.lpmn_values` to implement `e3nn.legendre`. Faster on GPU and supports reverse-mode differentiation.
- [BREAKING] Change the output format of `e3nn.legendre`!

#### Fixed
- Add back a lazy `._chunks` in `e3nn.IrrepsArray` to fix issue #38
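Presumably, `mul_gcd` is the greatest common divisor of the multiplicities of an irreps, e.g. multiplicities (4, 6, 2) in `4x0e + 6x1o + 2x2e` give 2. That reading is an assumption; a minimal stdlib sketch:

```python
from functools import reduce
from math import gcd

def mul_gcd(muls):
    """GCD of irrep multiplicities (assumed meaning of Irreps.mul_gcd)."""
    return reduce(gcd, muls)

# e.g. the multiplicities of a hypothetical "4x0e + 6x1o + 2x2e"
print(mul_gcd([4, 6, 2]))  # 2
```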
## 2023-06-24

### Changelog

#### Fixed
- Fix missing support for zero flags in `e3nn.elementwise_tensor_product`
## 2023-06-23

By merging two `jnp.einsum` calls into one, the tensor product is faster than before (60% faster in the case I tested; see `BENCHMARK.md`).

### Changelog

#### Changed
- [BREAKING] Move `Instruction`, `FunctionalTensorProduct` and `FunctionalFullyConnectedTensorProduct` into the `e3nn.legacy` submodule
- Reimplement `e3nn.tensor_product` and `e3nn.elementwise_tensor_product` in a simpler way
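The einsum-merging idea above can be sketched with `numpy.einsum`: two sequential contractions are fused into one call, which lets the backend optimize the whole contraction at once. The shapes and subscripts below are made up for illustration and are not e3nn's actual tensor-product contraction.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4))         # per-path weights (made-up shapes)
cg = rng.normal(size=(4, 5, 6, 7))  # Clebsch-Gordan-like coefficients
x = rng.normal(size=(5,))
y = rng.normal(size=(6,))

# Two contractions in sequence ...
tmp = np.einsum("uv,vijk->uijk", w, cg)
out_two = np.einsum("uijk,i,j->uk", tmp, x, y)

# ... merged into a single einsum call
out_one = np.einsum("uv,vijk,i,j->uk", w, cg, x, y)

print(np.allclose(out_two, out_one))  # True -- same result, one call
```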