The mlflow.tensorflow module provides an API for logging and loading TensorFlow models. This module exports TensorFlow models with the following flavors:
TensorFlow (native) format
This is the main flavor that can be loaded back into TensorFlow.
mlflow.pyfunc
Produced for use by generic pyfunc-based deployment tools and batch inference.
Note
Autologging is known to be compatible with the following package versions: 2.13.1 <= tensorflow <= 2.19.0. Autologging may not succeed when used with package versions outside of this range.
Enables autologging for tf.keras. Note that only tensorflow>=2.3 is supported. As an example, try running the Keras/TensorFlow example.
For each TensorFlow module, autologging captures the following information:
Metrics and Parameters
Training and validation loss.
User-specified metrics.
Optimizer config, e.g., learning_rate, momentum, etc.
Training configs, e.g., epochs, batch_size, etc.
Artifacts
Model summary on training start.
Saved Keras model in MLflow Model format.
TensorBoard logs on training end.
Metrics and Parameters
Metrics from the EarlyStopping callbacks: stopped_epoch, restored_epoch, restore_best_weight, etc.
fit() or fit_generator() parameters associated with EarlyStopping: min_delta, patience, baseline, restore_best_weights, etc.
Refer to the autologging tracking documentation for more information on TensorFlow workflows.
Note that autologging cannot be used together with the explicit MLflow callback, i.e., mlflow.tensorflow.MlflowCallback, because it would cause the same metrics to be logged twice. If you want to include mlflow.tensorflow.MlflowCallback in the callback list, turn off autologging by calling mlflow.tensorflow.autolog(disable=True).
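A minimal sketch of that pattern, assuming a compiled Keras model and training arrays (model, data, labels) are already defined:

import mlflow

# Disable autologging so metrics are not logged twice.
mlflow.tensorflow.autolog(disable=True)

with mlflow.start_run() as run:
    model.fit(
        data,
        labels,
        epochs=2,
        # Log metrics explicitly via the MLflow callback instead.
        callbacks=[mlflow.tensorflow.MlflowCallback(run)],
    )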
log_models – If True, trained models are logged as MLflow model artifacts. If False, trained models are not logged.
log_datasets – If True, dataset information is logged to MLflow Tracking. If False, dataset information is not logged.
disable – If True, disables the TensorFlow autologging integration. If False, enables the TensorFlow autologging integration.
exclusive – If True, autologged content is not logged to user-created fluent runs. If False, autologged content is logged to the active fluent run, which may be user-created.
disable_for_unsupported_versions – If True, disable autologging for versions of tensorflow that have not been tested against this version of the MLflow client or are incompatible.
silent – If True, suppress all event logs and warnings from MLflow during TensorFlow autologging. If False, show all events and warnings during TensorFlow autologging.
registered_model_name – If given, each time a model is trained, it is registered as a new model version of the registered model with this name. The registered model is created if it does not already exist.
log_input_examples – If True, input examples from training datasets are collected and logged along with tf/keras model artifacts during training. If False, input examples are not logged.
log_model_signatures – If True, ModelSignatures describing model inputs and outputs are collected and logged along with tf/keras model artifacts during training. If False, signatures are not logged. Note that logging TensorFlow models with signatures changes their pyfunc inference behavior when Pandas DataFrames are passed to predict(). When a signature is present, an np.ndarray (for single-output models) or a mapping from str -> np.ndarray (for multi-output models) is returned; when a signature is not present, a Pandas DataFrame is returned.
saved_model_kwargs – A dict of kwargs to pass to the tensorflow.saved_model.save method.
keras_model_kwargs – A dict of kwargs to pass to the keras_model.save method.
extra_tags – A dictionary of extra tags to set on each managed run created by autologging.
log_every_epoch – If True, training metrics are logged at the end of each epoch.
log_every_n_steps – If set, training metrics are logged every n training steps. log_every_n_steps must be None when log_every_epoch=True.
checkpoint – Enable automatic model checkpointing.
checkpoint_monitor – In automatic model checkpointing, the metric name to monitor if you set checkpoint_save_best_only to True.
checkpoint_mode – One of {"min", "max"}. In automatic model checkpointing, if save_best_only=True, the decision to overwrite the current save file is made based on either the maximization or the minimization of the monitored quantity.
checkpoint_save_best_only – If True, automatic model checkpointing only saves when the model is considered the "best" according to the quantity monitored, and the previous checkpoint model is overwritten.
checkpoint_save_weights_only – In automatic model checkpointing, if True, only the model's weights are saved. Otherwise, the optimizer states, lr-scheduler states, etc. are added to the checkpoint too.
checkpoint_save_freq – "epoch" or integer. When using "epoch", the callback saves the model after each epoch. When using an integer, the callback saves the model at the end of this many batches. Note that if the saving isn't aligned to epochs, the monitored metric may potentially be less reliable (it could reflect as little as 1 batch, since the metrics get reset every epoch). Defaults to "epoch".
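For instance, a minimal autologging run might look like the following sketch (the tiny random dataset and two-layer model are placeholders, not part of the API):

import mlflow
import numpy as np
from tensorflow import keras

# Enable autologging before training; models and input examples are logged.
mlflow.tensorflow.autolog(log_models=True, log_input_examples=True)

# Placeholder data for a 2-class classification problem.
data = np.random.uniform(size=[20, 4]).astype("float32")
labels = np.random.randint(2, size=20)

model = keras.Sequential(
    [
        keras.Input([4]),
        keras.layers.Dense(8, activation="relu"),
        keras.layers.Dense(2),
    ]
)
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer="adam",
)

# Metrics, parameters, and the trained model are captured automatically.
with mlflow.start_run():
    model.fit(data, labels, batch_size=4, epochs=2)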
The default Conda environment for MLflow Models produced by calls to save_model() and log_model().
A list of default pip requirements for MLflow Models produced by this flavor. Calls to save_model() and log_model() produce a pip environment that, at minimum, contains these requirements.
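Both helpers can be called directly to inspect the defaults, for example:

import mlflow.tensorflow

# The pip requirements and conda environment used when none are specified.
print(mlflow.tensorflow.get_default_pip_requirements())
print(mlflow.tensorflow.get_default_conda_env())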
A live reference to the global dictionary of custom objects.
If you enable checkpoint in autologging, checkpointed models are logged as MLflow artifacts during Keras model training. Using this API, you can load a checkpointed model.
To load the latest checkpoint, set both epoch and global_step to None. If checkpoint_save_freq is set to "epoch" in autologging, set the epoch param to load the checkpoint of a specific epoch. If checkpoint_save_freq is set to an integer in autologging, set the global_step param to load the checkpoint of a specific global step. The epoch and global_step params can't be set together.
model – A Keras model; this argument is required only when the saved checkpoint is weights-only.
run_id – The id of the run to which the model is logged. If not provided, the current active run is used.
epoch – The epoch of the checkpoint to be loaded, if you set checkpoint_save_freq to "epoch".
global_step – The global step of the checkpoint to be loaded, if you set checkpoint_save_freq to an integer.
The instance of a Keras model restored from the specified checkpoint.
import mlflow

mlflow.tensorflow.autolog(checkpoint=True, checkpoint_save_best_only=False)

model = create_tf_keras_model()  # Create a Keras model

with mlflow.start_run() as run:
    model.fit(data, label, epochs=10)

run_id = run.info.run_id

# Load the latest checkpoint model
latest_checkpoint_model = mlflow.tensorflow.load_checkpoint(run_id=run_id)

# Load the checkpoint model logged in the second epoch
checkpoint_model = mlflow.tensorflow.load_checkpoint(run_id=run_id, epoch=2)
Load an MLflow model that contains the TensorFlow flavor from the specified path.
model_uri – The location, in URI format, of the MLflow model. For example:
/Users/me/path/to/local/model
relative/path/to/local/model
s3://my_bucket/path/to/model
runs:/<mlflow_run_id>/run-relative/path/to/model
models:/<model_name>/<model_version>
models:/<model_name>/<stage>
For more information about supported URI schemes, see Referencing Artifacts.
dst_path – The local filesystem path to which to download the model artifact. This directory must already exist. If unspecified, a local output path will be created.
saved_model_kwargs – kwargs to pass to the tensorflow.saved_model.load method. Only available when you are loading a TensorFlow 2 core model.
keras_model_kwargs – kwargs to pass to the keras.models.load_model method. Only available when you are loading a Keras model.
A callable graph (tf.function) that takes inputs and returns inferences.
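For example, a previously logged Keras model can be loaded back and used for inference (the runs: URI is a placeholder for your own run ID and artifact path):

import mlflow

model = mlflow.tensorflow.load_model("runs:/<mlflow_run_id>/model")
# For a Keras model, the loaded object supports predict() directly;
# input_data stands in for your own model inputs.
predictions = model.predict(input_data)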
Log a TF2 core model (inheriting tf.Module) or a Keras model in MLflow Model format.
Note
If you log a Keras or TensorFlow model without a signature, inference with mlflow.pyfunc.spark_udf() will not work unless the model's pyfunc representation accepts pandas DataFrames as inference inputs.
You can infer a model's signature by calling the mlflow.models.infer_signature() API on features from the model's test dataset. You can also manually create a model signature, for example:
from mlflow.types.schema import Schema, TensorSpec
from mlflow.models import ModelSignature
import numpy as np

input_schema = Schema(
    [
        TensorSpec(np.dtype(np.uint64), (-1, 5), "field1"),
        TensorSpec(np.dtype(np.float32), (-1, 3, 2), "field2"),
    ]
)
# Create the signature for a model that requires 2 inputs:
#  - Input with name "field1", shape (-1, 5), type "np.uint64"
#  - Input with name "field2", shape (-1, 3, 2), type "np.float32"
signature = ModelSignature(inputs=input_schema)
model – The TF2 core model (inheriting tf.Module) or Keras model to be saved.
artifact_path – Deprecated. Use name instead.
custom_objects – A Keras custom_objects dictionary mapping names (strings) to custom classes or functions associated with the Keras model. MLflow saves these custom layers using CloudPickle and restores them automatically when the model is loaded with mlflow.tensorflow.load_model() and mlflow.pyfunc.load_model().
conda_env – Either a dictionary representation of a Conda environment or the path to a conda environment yaml file. If provided, this describes the environment this model should be run in. At a minimum, it should specify the dependencies contained in get_default_conda_env(). If None, a conda environment with pip requirements inferred by mlflow.models.infer_pip_requirements() is added to the model. If the requirement inference fails, it falls back to using get_default_pip_requirements. pip requirements from conda_env are written to a pip requirements.txt file and the full conda environment is written to conda.yaml. The following is an example dictionary representation of a conda environment:
{
    "name": "mlflow-env",
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.8.15",
        {
            "pip": ["tensorflow==x.y.z"],
        },
    ],
}
code_paths – A list of local filesystem paths to Python file dependencies (or directories containing file dependencies). These files are prepended to the system path when the model is loaded. If multiple files with import dependencies between them are declared as dependencies for a given model, they should use relative imports from a common root path to avoid import errors when the model is loaded.
For a detailed explanation of code_paths functionality, recommended usage patterns, and limitations, see the code_paths usage guide.
signature – An instance of the ModelSignature class that describes the model's inputs and outputs. If not specified but an input_example is supplied, a signature will be automatically inferred based on the supplied input example and model. To disable automatic signature inference when providing an input example, set signature to False. To manually infer a model signature, call infer_signature() on datasets with valid model inputs, such as a training dataset with the target column omitted, and valid model outputs, like model predictions made on the training dataset, for example:
from mlflow.models import infer_signature

train = df.drop(columns=["target_label"])
predictions = ...  # compute model predictions
signature = infer_signature(train, predictions)
input_example – One or several instances of valid model input. The input example is used as a hint of what data to feed the model. It will be converted to a Pandas DataFrame and then serialized to json using the Pandas split-oriented format, or a numpy array where the example will be serialized to json by converting it to a list. Bytes are base64-encoded. When the signature parameter is None, the input example is used to infer a model signature.
registered_model_name – If given, create a model version under registered_model_name, also creating a registered model if one with the given name does not exist.
await_registration_for – Number of seconds to wait for the model version to finish being created and reach READY status. By default, the function waits for five minutes. Specify 0 or None to skip waiting.
pip_requirements – Either an iterable of pip requirement strings (e.g. ["tensorflow", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes the environment this model should be run in. If None, a default list of requirements is inferred by mlflow.models.infer_pip_requirements() from the current software environment. If the requirement inference fails, it falls back to using get_default_pip_requirements. Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model's conda environment (conda.yaml) file.
extra_pip_requirements – Either an iterable of pip requirement strings (e.g. ["pandas", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes additional pip requirements that are appended to a default set of pip requirements generated automatically based on the user's current software environment. Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model's conda environment (conda.yaml) file.
Warning
The following arguments can't be specified at the same time:
conda_env
pip_requirements
extra_pip_requirements
This example demonstrates how to specify pip requirements using pip_requirements and extra_pip_requirements.
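For example (a sketch, assuming model is an already-trained Keras model; the version pin is illustrative):

import mlflow

# Option 1: specify the full requirement list explicitly.
mlflow.tensorflow.log_model(
    model,
    name="model",
    pip_requirements=["tensorflow==2.15.0"],  # illustrative pin
)

# Option 2: keep the inferred defaults and append extra packages.
mlflow.tensorflow.log_model(
    model,
    name="model",
    extra_pip_requirements=["pandas"],
)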
saved_model_kwargs – A dict of kwargs to pass to the tensorflow.saved_model.save method.
keras_model_kwargs – A dict of kwargs to pass to the keras_model.save method.
metadata – Custom metadata dictionary passed to the model and stored in the MLmodel file.
name – Model name.
params – A dictionary of parameters to log with the model.
tags – A dictionary of tags to log with the model.
model_type – The type of the model.
step – The step at which to log the model outputs and metrics.
model_id – The ID of the model.
A ModelInfo instance that contains the metadata of the logged model.
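Putting the pieces together, a typical logging flow might look like this sketch (the toy regression model and random data are placeholders):

import mlflow
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.Input([4]), keras.layers.Dense(1)])
model.compile(loss="mse", optimizer="adam")
model.fit(
    np.random.uniform(size=[8, 4]).astype("float32"),
    np.random.uniform(size=[8]).astype("float32"),
    epochs=1,
)

with mlflow.start_run():
    model_info = mlflow.tensorflow.log_model(
        model,
        name="model",
        # The input example also lets MLflow infer a signature.
        input_example=np.random.uniform(size=[2, 4]).astype("float32"),
    )

# The returned ModelInfo carries the URI for later loading.
print(model_info.model_uri)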
Save a TF2 core model (inheriting tf.Module) or Keras model in MLflow Model format to a path on the local file system.
Note
If you save a Keras or TensorFlow model without a signature, inference with mlflow.pyfunc.spark_udf() will not work unless the model's pyfunc representation accepts pandas DataFrames as inference inputs. You can infer a model's signature by calling the mlflow.models.infer_signature() API on features from the model's test dataset. You can also manually create a model signature, for example:
from mlflow.types.schema import Schema, TensorSpec
from mlflow.models import ModelSignature
import numpy as np

input_schema = Schema(
    [
        TensorSpec(np.dtype(np.uint64), (-1, 5), "field1"),
        TensorSpec(np.dtype(np.float32), (-1, 3, 2), "field2"),
    ]
)
# Create the signature for a model that requires 2 inputs:
#  - Input with name "field1", shape (-1, 5), type "np.uint64"
#  - Input with name "field2", shape (-1, 3, 2), type "np.float32"
signature = ModelSignature(inputs=input_schema)
model – The Keras model or TensorFlow module to be saved.
path – Local path where the MLflow model is to be saved.
conda_env – Either a dictionary representation of a Conda environment or the path to a conda environment yaml file. If provided, this describes the environment this model should be run in. At a minimum, it should specify the dependencies contained in get_default_conda_env(). If None, a conda environment with pip requirements inferred by mlflow.models.infer_pip_requirements() is added to the model. If the requirement inference fails, it falls back to using get_default_pip_requirements. pip requirements from conda_env are written to a pip requirements.txt file and the full conda environment is written to conda.yaml. The following is an example dictionary representation of a conda environment:
{
    "name": "mlflow-env",
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.8.15",
        {
            "pip": ["tensorflow==x.y.z"],
        },
    ],
}
code_paths – A list of local filesystem paths to Python file dependencies (or directories containing file dependencies). These files are prepended to the system path when the model is loaded. If multiple files with import dependencies between them are declared as dependencies for a given model, they should use relative imports from a common root path to avoid import errors when the model is loaded.
For a detailed explanation of code_paths functionality, recommended usage patterns, and limitations, see the code_paths usage guide.
mlflow_model – MLflow model configuration to which to add the tensorflow flavor.
custom_objects – A Keras custom_objects dictionary mapping names (strings) to custom classes or functions associated with the Keras model. MLflow saves these custom layers using CloudPickle and restores them automatically when the model is loaded with mlflow.tensorflow.load_model() and mlflow.pyfunc.load_model().
signature – An instance of the ModelSignature class that describes the model's inputs and outputs. If not specified but an input_example is supplied, a signature will be automatically inferred based on the supplied input example and model. To disable automatic signature inference when providing an input example, set signature to False. To manually infer a model signature, call infer_signature() on datasets with valid model inputs, such as a training dataset with the target column omitted, and valid model outputs, like model predictions made on the training dataset, for example:
from mlflow.models import infer_signature

train = df.drop(columns=["target_label"])
predictions = ...  # compute model predictions
signature = infer_signature(train, predictions)
input_example – One or several instances of valid model input. The input example is used as a hint of what data to feed the model. It will be converted to a Pandas DataFrame and then serialized to json using the Pandas split-oriented format, or a numpy array where the example will be serialized to json by converting it to a list. Bytes are base64-encoded. When the signature parameter is None, the input example is used to infer a model signature.
pip_requirements – Either an iterable of pip requirement strings (e.g. ["tensorflow", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes the environment this model should be run in. If None, a default list of requirements is inferred by mlflow.models.infer_pip_requirements() from the current software environment. If the requirement inference fails, it falls back to using get_default_pip_requirements. Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model's conda environment (conda.yaml) file.
extra_pip_requirements – Either an iterable of pip requirement strings (e.g. ["pandas", "-r requirements.txt", "-c constraints.txt"]) or the string path to a pip requirements file on the local filesystem (e.g. "requirements.txt"). If provided, this describes additional pip requirements that are appended to a default set of pip requirements generated automatically based on the user's current software environment. Both requirements and constraints are automatically parsed and written to requirements.txt and constraints.txt files, respectively, and stored as part of the model. Requirements are also written to the pip section of the model's conda environment (conda.yaml) file.
Warning
The following arguments can't be specified at the same time:
conda_env
pip_requirements
extra_pip_requirements
This example demonstrates how to specify pip requirements using pip_requirements and extra_pip_requirements.
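For example (a sketch, assuming model is an already-trained Keras model; the paths and version pin are illustrative):

import mlflow

# Pin the environment explicitly...
mlflow.tensorflow.save_model(
    model, path="model_pinned", pip_requirements=["tensorflow==2.15.0"]
)

# ...or append extras to the automatically inferred defaults.
mlflow.tensorflow.save_model(
    model, path="model_extra", extra_pip_requirements=["pandas"]
)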
saved_model_kwargs – A dict of kwargs to pass to the tensorflow.saved_model.save method if the model to be saved is a TensorFlow module.
keras_model_kwargs – A dict of kwargs to pass to the model.save method if the model to be saved is a Keras model.
metadata – Custom metadata dictionary passed to the model and stored in the MLmodel file.
Callback for logging TensorFlow training metrics to MLflow.
This callback logs model information at training start, and logs training metrics every epoch or every n steps (defined by the user) to MLflow.
log_every_epoch – bool. If True, log metrics every epoch. If False, log metrics every n steps.
log_every_n_steps – int. Log metrics every n steps. If None, log metrics every epoch. Must be None if log_every_epoch=True.
import numpy as np
import tensorflow as tf
from tensorflow import keras

import mlflow

# Prepare data for a 2-class classification.
data = tf.random.uniform([8, 28, 28, 3])
label = tf.convert_to_tensor(np.random.randint(2, size=8))

model = keras.Sequential(
    [
        keras.Input([28, 28, 3]),
        keras.layers.Flatten(),
        keras.layers.Dense(2),
    ]
)
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(0.001),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

with mlflow.start_run() as run:
    model.fit(
        data,
        label,
        batch_size=4,
        epochs=2,
        callbacks=[mlflow.tensorflow.MlflowCallback(run)],
    )
on_batch_end – Log metrics at the end of each batch with user-specified frequency.
on_epoch_end – Log metrics at the end of each epoch.
on_test_end – Log validation metrics at validation end.
on_train_begin – Log model architecture and optimizer configuration when training begins.
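For step-based rather than per-epoch logging, the callback can be configured as in this sketch (run, model, data, and label as in the example above):

# Log training metrics every 5 training steps instead of once per epoch.
callback = mlflow.tensorflow.MlflowCallback(
    run, log_every_epoch=False, log_every_n_steps=5
)
model.fit(data, label, batch_size=4, epochs=2, callbacks=[callback])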