Releases · keras-team/keras
Keras 3.11.2

Keras 3.11.1

Keras 3.11.0

What's Changed
- Changes to fit() / evaluate() / predict().
- Added the keras.ops.kaiser function (a short sketch of the new ops follows this list).
- Added the keras.ops.hanning function.
- Added the keras.ops.cbrt function.
- Added the keras.ops.deg2rad function.
- Added the keras.ops.layer_normalization function to leverage backend-specific performance optimizations.
- Updates to the Flatten layer.

Full Changelog: v3.10.0...v3.11.0
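A minimal sketch of the new ops, assuming kaiser(M, beta) and hanning(M) mirror their NumPy namesakes (returning window arrays of length M) and that cbrt and deg2rad are elementwise; the exact signatures may differ, so check the keras.ops docs.

```python
from keras import ops

# Window functions -- assumed to mirror np.kaiser(M, beta) and np.hanning(M).
kaiser_window = ops.kaiser(64, 14.0)  # 64-point Kaiser window with beta=14
hann_window = ops.hanning(64)         # 64-point Hann window

# Elementwise ops -- assumed to mirror np.cbrt and np.deg2rad.
x = ops.convert_to_tensor([1.0, 8.0, 27.0])
print(ops.cbrt(x))                                             # ~[1., 2., 3.]
print(ops.deg2rad(ops.convert_to_tensor([0.0, 90.0, 180.0])))  # ~[0., pi/2, pi]
```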
Keras 3.10.0

New features
- Sharded saving of model weights is now supported in model.save(). It is controlled via the max_shard_size argument. Specifying this argument will split your Keras model weight file into chunks of at most this size. Use load_model() to reload the sharded files (see the sketch after this list).
- Added the keras.optimizers.Muon optimizer.
- Added the keras.layers.RandomElasticTransform layer.
- Added the keras.losses.CategoricalGeneralizedCrossEntropy loss (with functional version keras.losses.categorical_generalized_cross_entropy).
- Added an axis argument to SparseCategoricalCrossentropy.
- Added lora_alpha to all LoRA-enabled layers. If set, this parameter scales the low-rank adaptation delta during the forward pass.
- Added the keras.activations.sparse_sigmoid activation.
- Added keras.ops.image.elastic_transform.
- Added keras.ops.angle.
- Added keras.ops.bartlett.
- Added keras.ops.blackman.
- Added keras.ops.hamming.
- Added keras.ops.view_as_complex and keras.ops.view_as_real.
- Added tf.RaggedTensor support to the Embedding layer.
- Added a synchronization argument.

Full Changelog: v3.9.0...v3.10.0
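A minimal sketch of sharded saving and reloading. The assumption here is that max_shard_size is expressed in GB and that the usual .keras path works unchanged; check the model.save() documentation for the exact unit and accepted values.

```python
import keras

# A small stand-in model; sharding matters for models with very large weights.
model = keras.Sequential([
    keras.Input(shape=(32,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])

# Save with sharded weights. Assumption: max_shard_size is given in GB.
model.save("my_model.keras", max_shard_size=1)

# load_model() reassembles the shards transparently.
reloaded = keras.saving.load_model("my_model.keras")
```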
Keras 3.9.2

What's Changed

Full Changelog: v3.9.1...v3.9.2

Keras 3.9.1

What's Changed

Full Changelog: v3.9.0...v3.9.1
Keras 3.9.0

New features
- Added keras.RematScope and keras.remat. They can be used to turn on rematerialization for certain layers in a fine-grained manner, e.g. only for layers larger than a certain size, for a specific set of layers, or only for activations (see the sketch after this list).
- Added keras.ops.rot90.
- Added keras.ops.rearrange (Einops-style).
- Added keras.ops.signbit.
- Added keras.ops.polar.
- Added keras.ops.image.perspective_transform.
- Added keras.ops.image.gaussian_blur.
- Added keras.layers.RMSNormalization.
- Added keras.layers.AugMix.
- Added keras.layers.CutMix.
- Added keras.layers.RandomInvert.
- Added keras.layers.RandomErasing.
- Added keras.layers.RandomGaussianBlur.
- Added keras.layers.RandomPerspective.
- Added a dtype argument to the JaxLayer and FlaxLayer layers.
- Updates to the BinaryAccuracy metric.
- Added an antialias argument to the keras.layers.Resizing layer.
- Security fix for loading .npz model files (numpy format). Thanks to Peng Zhou for reporting the vulnerability.

Full Changelog: v3.8.0...v3.9.0
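A minimal sketch of fine-grained rematerialization. The mode and threshold argument names below are assumptions, not the confirmed keras.RematScope signature; the notes above only state that rematerialization can be restricted to large layers, specific layers, or activations.

```python
import keras

# Illustrative only: the mode/threshold argument names are assumptions --
# check the keras.RematScope docs for the exact API.
with keras.RematScope(mode="larger_than", output_size_threshold=2 ** 20):
    # Layers created inside the scope become candidates for rematerialization,
    # trading recomputation in the backward pass for lower activation memory.
    inputs = keras.Input(shape=(1024,))
    x = keras.layers.Dense(4096, activation="relu")(inputs)
    outputs = keras.layers.Dense(10)(x)
    model = keras.Model(inputs, outputs)
```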
Keras 3.8.0

New: OpenVINO backend
OpenVINO is now available as an inference-only Keras backend. You can start using it by setting the backend field to "openvino" in your keras.json config file.

OpenVINO is a deep learning inference-only framework tailored for CPU (x86, ARM), certain GPUs (OpenCL capable, integrated and discrete) and certain AI accelerators (Intel NPU).

Because OpenVINO does not support gradients, you cannot use it for training (e.g. model.fit()) -- only inference. You can train your models with the JAX/TensorFlow/PyTorch backends, and once trained, reload them with the OpenVINO backend for inference on a target device supported by OpenVINO.
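A minimal sketch of the train-then-infer workflow described above, using the KERAS_BACKEND environment variable as an alternative to editing keras.json; the model path and input shape are illustrative.

```python
import os

# Select the OpenVINO backend before importing Keras; editing the "backend"
# field in ~/.keras/keras.json achieves the same thing.
os.environ["KERAS_BACKEND"] = "openvino"

import numpy as np
import keras

# Reload a model that was trained earlier under JAX/TensorFlow/PyTorch.
model = keras.saving.load_model("trained_model.keras")  # illustrative path

# Inference works; training entry points such as model.fit() do not, because
# OpenVINO provides no gradients.
x = np.random.rand(8, 32).astype("float32")  # shape must match the model input
predictions = model.predict(x)
```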
New: ONNX model export
You can now export your Keras models to the ONNX format from the JAX, TensorFlow, and PyTorch backends. Just pass format="onnx" in your model.export() call:

```python
import onnxruntime

# Export the model as an ONNX artifact
model.export("path/to/location", format="onnx")

# Load the artifact in a different process/environment
ort_session = onnxruntime.InferenceSession("path/to/location")

# Run inference (input_data: list/tuple of arrays matching the model's inputs)
ort_inputs = {
    k.name: v for k, v in zip(ort_session.get_inputs(), input_data)
}
predictions = ort_session.run(None, ort_inputs)
```

New: Scikit-Learn API compatibility interface
It's now possible to easily integrate Keras models into Scikit-Learn pipelines! The following wrapper classes are available (a usage sketch follows the list):

- keras.wrappers.SKLearnClassifier: implements the sklearn Classifier API
- keras.wrappers.SKLearnRegressor: implements the sklearn Regressor API
- keras.wrappers.SKLearnTransformer: implements the sklearn Transformer API
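A minimal sketch of plugging a Keras model into a scikit-learn pipeline. The model= parameter name and the (X, y) signature of the builder callable are assumptions; check the keras.wrappers documentation for the exact interface.

```python
import keras
from keras.wrappers import SKLearnClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def build_model(X, y):
    # Assumption: the wrapper calls this builder with the training data so the
    # model can be shaped accordingly; the exact builder signature may differ.
    model = keras.Sequential([
        keras.Input(shape=(X.shape[1],)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(len(set(y)), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model


clf = SKLearnClassifier(model=build_model)  # parameter name is an assumption
pipeline = make_pipeline(StandardScaler(), clf)
# pipeline.fit(X_train, y_train); pipeline.predict(X_test)
```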
Other feature additions
- Added keras.ops.diagflat.
- Added keras.ops.unravel_index.
- Added the sparse_plus activation.
- Added the sparsemax activation.
- Added keras.layers.RandAugment (an augmentation sketch follows this list).
- Added keras.layers.Equalization.
- Added keras.layers.MixUp.
- Added keras.layers.RandomHue.
- Added keras.layers.RandomGrayscale.
- Added keras.layers.RandomSaturation.
- Added keras.layers.RandomColorJitter.
- Added keras.layers.RandomColorDegeneration.
- Added keras.layers.RandomSharpness.
- Added keras.layers.RandomShear.
- Added an axis argument to the tversky loss.
- Made keras.random.shuffle XLA compilable.
- model.export() and keras.export.ExportArchive now work with the PyTorch backend, supporting both the TF SavedModel format and the ONNX format.

Full Changelog: v3.7.0...v3.8.0
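A minimal sketch of the new image augmentation layers applied to a batch of images. The value_range and factor arguments shown are assumptions based on the conventions of existing Keras preprocessing layers; check each layer's docs for its actual signature.

```python
import numpy as np
import keras

# A fake batch of 8 RGB images in the [0, 255] range.
images = np.random.randint(0, 256, size=(8, 64, 64, 3)).astype("float32")

# Argument names/values below are illustrative assumptions.
augment = keras.Sequential([
    keras.layers.RandAugment(value_range=(0, 255)),
    keras.layers.RandomHue(factor=0.2, value_range=(0, 255)),
    keras.layers.RandomGrayscale(factor=0.1),
])

augmented = augment(images, training=True)
print(augmented.shape)  # (8, 64, 64, 3)
```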
Keras 3.7.0

API changes
- Added a flash_attention argument to keras.ops.dot_product_attention and to keras.layers.MultiHeadAttention (see the sketch after this list).
- Added the keras.layers.STFTSpectrogram layer (to extract STFT spectrograms from inputs as a preprocessing step) as well as its initializer, keras.initializers.STFTInitializer.
- Added the celu, glu, log_sigmoid, hard_tanh, hard_shrink, and squareplus activations.
- Added the keras.losses.Circle loss.
- Added visualization utilities: keras.visualization.draw_bounding_boxes, keras.visualization.draw_segmentation_masks, keras.visualization.plot_image_gallery, keras.visualization.plot_segmentation_mask_gallery.
- Added a double_checkpoint argument to BackupAndRestore to save a fallback checkpoint in case the first checkpoint gets corrupted.
- Updates to the CenterCrop, RandomFlip, RandomZoom, RandomTranslation, and RandomCrop preprocessing layers.
- Added the keras.ops.exp2 and keras.ops.inner operations.
- Changes to bias_add.

Full Changelog: v3.6.0...v3.7.0
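A minimal sketch of the flash_attention flag at both the op level and the layer level. The (batch, length, num_heads, head_dim) layout follows the JAX convention for dot_product_attention and is an assumption here; whether flash attention actually kicks in depends on backend and hardware support.

```python
import numpy as np
import keras
from keras import ops

# Toy query/key/value tensors: (batch, length, num_heads, head_dim) -- assumed layout.
q = np.random.rand(2, 16, 4, 32).astype("float32")
k = np.random.rand(2, 16, 4, 32).astype("float32")
v = np.random.rand(2, 16, 4, 32).astype("float32")

# Op-level: request the flash attention kernel where the backend supports it.
out = ops.dot_product_attention(q, k, v, flash_attention=True)

# Layer-level: the same flag on MultiHeadAttention.
mha = keras.layers.MultiHeadAttention(num_heads=4, key_dim=32, flash_attention=True)
x = np.random.rand(2, 16, 128).astype("float32")
y = mha(x, x)
```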
Keras 3.6.0

Highlights
- New file editor utility: keras.saving.KerasFileEditor. Use it to inspect, diff, modify and resave Keras weights files. See basic workflow here.
- New keras.utils.Config class for managing experiment config parameters.

- When using keras.utils.get_file with extract=True or untar=True, the return value will be the path of the extracted directory, rather than the path of the archive.
- Logging is now asynchronous in fit(), evaluate(), and predict(). This enables 100% compact stacking of train_step calls on accelerators (e.g. when running small models on TPU).
  Custom callbacks that rely on on_batch_end will disable async logging. You can force it back by adding self.async_safe = True to your callbacks. Note that the TensorBoard callback isn't considered async safe by default. Default callbacks like the progress bar are async safe.
- Added the keras.saving.KerasFileEditor utility to inspect, diff, modify and resave Keras weights files.
- Added the keras.utils.Config class. It behaves like a dictionary, with a few nice features (see the sketch after this list):
  - Attribute-style and dict-style access are both supported (config.foo = 2 or config["foo"] are both valid).
  - It can be serialized to JSON via config.to_json().
  - It can be made immutable via config.freeze().
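A minimal sketch of keras.utils.Config as described above; the no-argument constructor shown here is an assumption.

```python
import keras

config = keras.utils.Config()  # assumption: no-arg constructor

# Attribute-style and dict-style access are interchangeable.
config.learning_rate = 1e-3
config["batch_size"] = 32
assert config["learning_rate"] == config.learning_rate

# Serialize for logging or experiment tracking.
print(config.to_json())

# Lock the config so later code cannot accidentally modify it.
config.freeze()
```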
- Added bitwise ops: bitwise_and, bitwise_invert, bitwise_left_shift, bitwise_not, bitwise_or, bitwise_right_shift, bitwise_xor.
- Added keras.ops.logdet.
- Added keras.ops.trunc.
- Added keras.ops.dot_product_attention.
- Added keras.ops.histogram.
- PyDataset instances can now use multithreading.
- Added a verbose argument in the keras.saving.ExportArchive.write_out() method for exporting TF SavedModel.
- Added an epsilon argument in keras.ops.normalize.
- Added the Model.get_state_tree() method for retrieving a nested dict mapping variable paths to variable values (either as numpy arrays or backend tensors (default)). This is useful for rolling out custom JAX training loops (see the sketch after this list).
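A minimal sketch of Model.get_state_tree(). Only the "nested dict of variable paths to values" behavior is taken from the notes above; the walker below makes no assumption about the exact key layout.

```python
import keras

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(2, name="head"),
])
model.compile(optimizer="adam", loss="mse")

# Returns a nested dict mapping variable paths to variable values
# (backend tensors by default, per the release notes above).
state = model.get_state_tree()

# Walk the tree generically and print each variable path and shape.
def walk(tree, prefix=""):
    for key, value in tree.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            walk(value, path)
        else:
            print(path, getattr(value, "shape", None))

walk(state)
```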
- Added keras.layers.AutoContrast and keras.layers.Solarization.
- Added the keras.layers.Pipeline class, to apply a sequence of layers to an input. This class is useful to build a preprocessing pipeline (a sketch follows the list). Compared to a Sequential model, Pipeline features a few important differences:
  - It's not a Model, just a plain layer.
  - When the layers in the pipeline are compatible with tf.data, the pipeline will also remain tf.data compatible, independently of the backend you use.
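A minimal sketch of a preprocessing Pipeline; the constructor is assumed to take a list of layers, mirroring Sequential.

```python
import numpy as np
import keras

# Assumption: Pipeline takes a list of layers, like Sequential does.
preprocessing = keras.layers.Pipeline([
    keras.layers.Rescaling(1.0 / 255),
    keras.layers.Resizing(64, 64),
])

images = np.random.randint(0, 256, size=(8, 96, 96, 3)).astype("float32")
processed = preprocessing(images)
print(processed.shape)  # (8, 64, 64, 3)

# Because these layers are tf.data-compatible, the pipeline can also be mapped
# over a tf.data.Dataset regardless of the active Keras backend.
```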
Full Changelog: v3.5.0...v3.6.0