Releases: keras-team/keras
Keras 3.5.0
What's Changed
- Add integration with the Hugging Face Hub. You can now save models to the Hugging Face Hub directly from `keras.Model.save()` and load `.keras` models directly from the Hugging Face Hub with `keras.saving.load_model()`.
- Ensure compatibility with NumPy 2.0.
- Add `keras.optimizers.Lamb` optimizer.
- Improve `keras.distribution` API support for very large models.
- Add `keras.ops.associative_scan` op.
- Add `keras.ops.searchsorted` op.
- Add `keras.utils.PyDataset.on_epoch_begin()` method.
- Add `data_format` argument to `keras.layers.ZeroPadding1D` layer.
- Bug fixes and performance improvements.
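For a sense of what the two new ops compute, here is a sketch of their semantics using plain Python and NumPy as stand-ins (the values are illustrative; the `keras.ops` versions operate on backend tensors):

```python
import numpy as np

# keras.ops.associative_scan: inclusive scan with an associative binary
# op (here addition, i.e. a cumulative sum). Keras can compute this in
# parallel; this sequential loop just shows the result it produces.
def associative_scan(op, xs):
    acc, out = xs[0], [xs[0]]
    for x in xs[1:]:
        acc = op(acc, x)
        out.append(acc)
    return out

print(associative_scan(lambda a, b: a + b, [1, 2, 3, 4]))  # [1, 3, 6, 10]

# keras.ops.searchsorted mirrors np.searchsorted: the indices at which
# values would be inserted into a sorted array to keep it sorted.
print(np.searchsorted([0.0, 1.0, 2.5, 4.0], [-1.0, 1.0, 3.0]))  # [0 1 3]
```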
Full Changelog: v3.4.1...v3.5.0
Keras 3.4.1
This is a minor bugfix release.
Keras 3.4.0
Highlights
- Add support for arbitrary, deeply nested input/output structures in Functional models (e.g. dicts of dicts of lists of inputs or outputs...)
- Add support for optional Functional inputs.
- Introduce `keras.dtype_policies.DTypePolicyMap` for easy configuration of dtype policies of nested sublayers of a subclassed layer/model.
- New ops:
  - `keras.ops.argpartition`
  - `keras.ops.scan`
  - `keras.ops.lstsq`
  - `keras.ops.switch`
  - `keras.ops.dtype`
  - `keras.ops.map`
  - `keras.ops.image.rgb_to_hsv`
  - `keras.ops.image.hsv_to_rgb`
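Of these, `keras.ops.scan` follows the familiar carry-and-collect contract (in the style of `jax.lax.scan`). A pure-Python sketch of the semantics, with an illustrative running-sum step function:

```python
def scan(f, init, xs):
    """Sequential sketch of a scan: thread a carry through xs and
    collect a per-step output, like reduce() that also keeps results."""
    carry, ys = init, []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

# Running sum: the carry is the total so far; each step also emits it.
final, running = scan(lambda c, x: (c + x, c + x), 0, [1, 2, 3, 4])
print(final, running)  # 10 [1, 3, 6, 10]
```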
What's Changed
- Add support for `float8` inference for `Dense` and `EinsumDense` layers.
- Add custom `name` argument in all Keras Applications models.
- Add `axis` argument in `keras.losses.Dice`.
- Enable `keras.utils.FeatureSpace` to be used in a `tf.data` pipeline even when the backend isn't TensorFlow.
- `StringLookup` layer can now take `tf.SparseTensor` as input.
- `Metric.variables` is now recursive.
- Add `training` argument to `Model.compute_loss()`.
- Add `dtype` argument to all losses.
- `keras.utils.split_dataset` now supports nested structures in dataset.
- Bug fixes and performance improvements.
Full Changelog: v3.3.3...v3.4.0
Keras 3.3.3
This is a minor bugfix release.
Keras 3.3.2
This is a simple fix release that re-surfaces legacy Keras 2 APIs that aren't part of the Keras package proper but are still featured in `tf.keras`. No other content has changed.
Keras 3.3.1
This is a simple fix release that moves the legacy `_tf_keras` API directory to the root of the Keras pip package. This is done in order to preserve import paths like `from tensorflow.keras import layers` without making any changes to the TensorFlow API files.
No other content has changed.
Keras 3.3.0
What's Changed
- Introduce float8 training.
- Add LoRA to ConvND layers.
- Add `keras.ops.ctc_decode` for JAX and TensorFlow.
- Add `keras.ops.vectorize`, `keras.ops.select`.
- Add `keras.ops.image.rgb_to_grayscale`.
- Add `keras.losses.Tversky` loss.
- Add full `bincount` and `digitize` sparse support.
- Models and layers now return owned metrics recursively.
- Add pickling support for Keras models. Note that pickling is not recommended; prefer using Keras saving APIs.
- Bug fixes and performance improvements.
In addition, the codebase structure has evolved:
- All source files are now in `keras/src/`.
- All API files are now in `keras/api/`.
- The codebase structure stays unchanged when building the Keras pip package. This means you can `pip install` Keras directly from the GitHub sources.
New Contributors
- @kapoor1992 made their first contribution in #19484
- @IMvision12 made their first contribution in #19393
- @alanwilter made their first contribution in #19438
- @chococigar made their first contribution in #19323
- @LukeWood made their first contribution in #19555
- @AlexanderLavelle made their first contribution in #19575
Full Changelog: v3.2.1...v3.3.0
Keras 3.2.1
Keras 3.2.0
What's Changed
- Introduce QLoRA-like technique for LoRA fine-tuning of `Dense` and `EinsumDense` layers (and thereby any LLM) in int8 precision.
- Extend `keras.ops.custom_gradient` support to PyTorch.
- Add `keras.layers.JaxLayer` and `keras.layers.FlaxLayer` to wrap JAX/Flax modules as Keras layers.
- Allow `save_model` & `load_model` to accept a file-like object.
- Add quantization support to the `Embedding` layer.
- Make it possible to update metrics inside a custom `compute_loss` method with all backends.
- Make it possible to access `self.losses` inside a custom `compute_loss` method with the JAX backend.
- Add `keras.losses.Dice` loss.
- Add `keras.ops.correlate`.
- Make it possible to use cuDNN LSTM & GRU with a mask with the TensorFlow backend.
- Better JAX support in `model.export()`: add support for aliases, finer control over `jax2tf` options, and dynamic batch shapes.
- Bug fixes and performance improvements.
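The new `keras.ops.correlate` mirrors NumPy's `np.correlate`, the cross-correlation of two 1-D sequences. A minimal NumPy sketch with illustrative inputs:

```python
import numpy as np

# Cross-correlation of two 1-D sequences. With mode="valid" (NumPy's
# default) only positions of full overlap are returned, so two
# equal-length inputs yield a single value: sum(a[k] * v[k]).
a = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 0.5])
out = np.correlate(a, v)  # 1*0 + 2*1 + 3*0.5
print(out)  # [3.5]
```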
New Contributors
- @abhaskumarsinha made their first contribution in #19302
- @qaqland made their first contribution in #19378
- @tvogel made their first contribution in #19310
- @lpizzinidev made their first contribution in #19409
- @Murhaf made their first contribution in #19444
Full Changelog: v3.1.1...v3.2.0
Keras 3.1.1
This is a minor bugfix release over 3.1.0.
What's Changed
- Unwrap variable values in all stateless calls by @hertschuh in #19287
- Fix `draw_seed` causing device discrepancy issue during `torch`'s symbolic execution by @KhawajaAbaid in #19289
- Fix `TestCase.run_layer_test` for multi-output layers by @shkarupa-alex in #19293
- Sine docstring by @grasskin in #19295
- Fix `keras.ops.softmax` for the TensorFlow backend by @tirthasheshpatel in #19300
- Fix mixed precision check in `TestCase.run_layer_test`: compare with output_spec dtype instead of hardcoded float16 by @shkarupa-alex in #19297
- `ArrayDataAdapter` no longer converts to NumPy and supports sparse tens… by @hertschuh in #19298
- Add token to codecov by @haifeng-jin in #19312
- Add TensorFlow support for variable `scatter_update` in optimizers by @hertschuh in #19313
- Replace `dm-tree` with `optree` by @james77777778 in #19306
- Downgrade codecov to v3 by @haifeng-jin in #19319
- Allow tensors in `tf.Dataset`s to have different dimensions by @hertschuh in #19318
- Update codecov setting by @haifeng-jin in #19320
- Set dtype policy for uint8 by @sampathweb in #19327
- Use Value dim shape for Attention `compute_output_shape` by @sampathweb in #19284
New Contributors
- @tirthasheshpatel made their first contribution in #19300
Full Changelog: v3.1.0...v3.1.1