adanet

AdaNet: Fast and flexible AutoML with learning guarantees.

Estimators

High-level APIs for training, evaluating, predicting, and serving AdaNet models.

AutoEnsembleEstimator

class adanet.AutoEnsembleEstimator(head, candidate_pool, max_iteration_steps, ensemblers=None, ensemble_strategies=None, logits_fn=None, last_layer_fn=None, evaluator=None, metric_fn=None, force_grow=False, adanet_loss_decay=0.9, worker_wait_timeout_secs=7200, model_dir=None, config=None, debug=False, enable_ensemble_summaries=True, enable_subnetwork_summaries=True, global_step_combiner_fn=<function reduce_mean>, max_iterations=None, replay_config=None, **kwargs)[source]

Bases: adanet.core.estimator.Estimator

A tf.estimator.Estimator that learns to ensemble models.

Specifically, it learns to ensemble models from a candidate pool using the AdaNet algorithm.

# A simple example of learning to ensemble linear and neural network
# models.

import adanet
import tensorflow as tf

feature_columns = ...

head = tf.estimator.MultiClassHead(n_classes=10)

# Learn to ensemble linear and DNN models.
estimator = adanet.AutoEnsembleEstimator(
    head=head,
    candidate_pool=lambda config: {
        "linear":
            tf.estimator.LinearEstimator(
                head=head,
                feature_columns=feature_columns,
                config=config,
                optimizer=...),
        "dnn":
            tf.estimator.DNNEstimator(
                head=head,
                feature_columns=feature_columns,
                config=config,
                optimizer=...,
                hidden_units=[1000, 500, 100])},
    max_iteration_steps=50)

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's
  # class index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's
  # class index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)

Or to train candidate subestimators on different training data subsets:

train_data_files = [...]

# Learn to ensemble linear and DNN models.
estimator = adanet.AutoEnsembleEstimator(
    head=head,
    candidate_pool=lambda config: {
        "linear":
            adanet.AutoEnsembleSubestimator(
                tf.estimator.LinearEstimator(
                    head=head,
                    feature_columns=feature_columns,
                    config=config,
                    optimizer=...),
                make_train_input_fn(train_data_files[:-1])),
        "dnn":
            adanet.AutoEnsembleSubestimator(
                tf.estimator.DNNEstimator(
                    head=head,
                    feature_columns=feature_columns,
                    config=config,
                    optimizer=...,
                    hidden_units=[1000, 500, 100]),
                make_train_input_fn(train_data_files[1:]))},
    max_iteration_steps=50)

estimator.train(input_fn=make_train_input_fn(train_data_files), steps=100)
Parameters:
  • head – A tf.contrib.estimator.Head instance for computing loss and evaluation metrics for every candidate.
  • candidate_pool – List of tf.estimator.Estimator and AutoEnsembleSubestimator objects, or dict of string name to tf.estimator.Estimator and AutoEnsembleSubestimator objects that are candidate subestimators to ensemble at each iteration. The order does not directly affect which candidates will be included in the final ensemble, but will affect the name of the candidate. When using a dict, the string key becomes the candidate subestimator’s name. Alternatively, this argument can be a function that takes a config argument and returns the aforementioned values in case the objects need to be re-instantiated at each adanet iteration.
  • max_iteration_steps – Total number of steps for which to train candidates per iteration. If OutOfRange or StopIteration occurs in the middle, training stops before max_iteration_steps steps.
  • logits_fn

    A function for fetching the subnetwork logits from a tf.estimator.EstimatorSpec, which should obey the following signature (see the sketch after the Raises section below):

    • Args: Can only have the following argument:
      • estimator_spec: The candidate’s tf.estimator.EstimatorSpec.
    • Returns: Logits tf.Tensor or dict of string to logits tf.Tensor (for multi-head) for the candidate subnetwork extracted from the given estimator_spec. When None, it will default to returning estimator_spec.predictions when they are a tf.Tensor, or the tf.Tensor for the key ‘logits’ when they are a dict of string to tf.Tensor.
  • last_layer_fn

    An optional function for fetching the subnetwork last_layer from a tf.estimator.EstimatorSpec, which should obey the following signature:

    • Args: Can only have the following argument:
      • estimator_spec: The candidate’s tf.estimator.EstimatorSpec.
    • Returns: Last layer tf.Tensor or dict of string to last layer tf.Tensor (for multi-head) for the candidate subnetwork extracted from the given estimator_spec. The last_layer can be used for learning ensembles or exporting them as embeddings.

    When None, it will default to using the logits as the last_layer.

  • ensemblers – See adanet.Estimator.
  • ensemble_strategies – See adanet.Estimator.
  • evaluator – See adanet.Estimator.
  • metric_fn – See adanet.Estimator.
  • force_grow – See adanet.Estimator.
  • adanet_loss_decay – See adanet.Estimator.
  • worker_wait_timeout_secs – See adanet.Estimator.
  • model_dir – See adanet.Estimator.
  • config – See adanet.Estimator.
  • debug – See adanet.Estimator.
  • enable_ensemble_summaries – See adanet.Estimator.
  • enable_subnetwork_summaries – See adanet.Estimator.
  • global_step_combiner_fn – See adanet.Estimator.
  • max_iterations – See adanet.Estimator.
  • replay_config – See adanet.Estimator.
  • **kwargs – Extra keyword args passed to the parent.
Returns:

An adanet.AutoEnsembleEstimator instance.

Raises:

ValueError – If any of the candidates in candidate_pool are not tf.estimator.Estimator instances.
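
A minimal sketch of custom logits_fn and last_layer_fn callables is shown below. The "logits" prediction key and the reuse of logits as the last layer are illustrative assumptions; adapt them to the candidates actually used.

def logits_fn(estimator_spec):
  # Assumes the candidate's predictions dict exposes a "logits" key
  # (hypothetical; depends on the candidate estimator).
  return estimator_spec.predictions["logits"]

def last_layer_fn(estimator_spec):
  # For illustration, reuse the logits as the last layer, mirroring the
  # default behavior described above.
  return estimator_spec.predictions["logits"]

estimator = adanet.AutoEnsembleEstimator(
    head=head,
    candidate_pool=...,
    logits_fn=logits_fn,
    last_layer_fn=last_layer_fn,
    max_iteration_steps=50)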

eval_dir(name=None)

Shows the directory name where evaluation metrics are dumped.

Parameters:name – Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns:A string which is the path of the directory containing evaluation metrics.
evaluate(input_fn, steps=None, hooks=None, checkpoint_path=None, name=None)

Evaluates the model given evaluation data input_fn.

For each step, calls input_fn, which returns one batch of data. Evaluates until either steps batches are processed or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

Parameters:
  • input_fn

    A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • steps – Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
  • checkpoint_path – Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
  • name – Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns:

A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises:

ValueError – If steps <= 0.

experimental_export_all_saved_models(export_dir_base, input_receiver_fn_map, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None)

Exports a SavedModel with tf.MetaGraphDefs for each requested mode.

For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator’s model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.

For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s.
  • input_receiver_fn_map – dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns:

The path to the exported directory as a bytes object.

Raises:

ValueError – if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode='infer')

Exports inference graph as a SavedModel into the given dir.

For a detailed guide, see [Using SavedModel with Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators).

This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator’s model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s.
  • serving_input_receiver_fn – A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
  • experimental_mode – A tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns:

The path to the exported directory as a bytes object.

Raises:
  • ValueError – if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.
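
As a usage sketch (not from the original docs), a parsing-based serving export might look like the following; the export directory is a hypothetical path.

feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

export_dir = estimator.export_saved_model(
    export_dir_base="/tmp/adanet_export",  # hypothetical directory
    serving_input_receiver_fn=serving_input_receiver_fn)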
export_savedmodel(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False)

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.

get_variable_names()

Returns list of all variable names in this model.

Returns:List of names.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
get_variable_value(name)

Returns value of the variable given by name.

Parameters:name – string or a list of string, name of the tensor.
Returns:Numpy array - value of the tensor.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
latest_checkpoint()

Finds the filename of the latest saved checkpoint file in model_dir.

Returns:The full path to the latest checkpoint or None if no checkpoint was found.
model_fn

Returns the model_fn which is bound to self.params.

Returns:The model_fn with the following signature: def model_fn(features, labels, mode, config)
predict(input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True)

Yields predictions for given features.

Please note that interleaving two predict outputs does not work. See: [issue/20506]( https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)

Parameters:
  • input_fn

    A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of the Dataset object must have the same constraints as below.
    • features: A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
    • A tuple, in which case the first item is extracted as features.
  • predict_keys – list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
  • checkpoint_path – Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
  • yield_single_examples – If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields:

Evaluated values of predictions tensors.

Raises:
  • ValueError – If batch length of predictions is not the same and yield_single_examples is True.
  • ValueError – If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.
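
For example, predictions can be consumed as an iterator; the available keys depend on the head used and should be treated as assumptions here.

for prediction in estimator.predict(input_fn=input_fn_predict):
  # Each `prediction` is a dict of numpy values; keys such as
  # "probabilities" or "class_ids" are head-dependent.
  print(prediction)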
train(input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)

Trains a model given training data input_fn.

NOTE: If a given input_fn raises an OutOfRangeError, then all of training will exit. The best practice is to make the training dataset repeat forever, in order to perform model search for more than one iteration.
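
For example, an input_fn along the following lines repeats the dataset indefinitely so that AdaNet can run more than one iteration; the in-memory features are a placeholder for a real data source.

def input_fn_train():
  # Placeholder (features, labels) data; substitute a real dataset.
  dataset = tf.data.Dataset.from_tensor_slices(
      ({"x": [[1.0], [2.0], [3.0], [4.0]]}, [0, 1, 0, 1]))
  # Repeat forever so training only stops at `steps`/`max_steps`,
  # allowing AdaNet to run multiple iterations.
  return dataset.repeat().batch(2)

# With max_iteration_steps=50, training for 150 steps runs ~3 AdaNet iterations.
estimator.train(input_fn=input_fn_train, max_steps=150)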

Parameters:
  • input_fn

    A function that provides input data for training as minibatches. See [Premade Estimators]( https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
  • steps – Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don’t want incremental behavior, set max_steps instead. If set, max_steps must be None.
  • max_steps – Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iterations, since the first call did all 100 steps.
  • saving_listeners – list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns:

self, for chaining.

Raises:
  • ValueError – If both steps and max_steps are not None.
  • ValueError – If either steps or max_steps <= 0.

AutoEnsembleSubestimator

class adanet.AutoEnsembleSubestimator[source]

Bases: adanet.autoensemble.common.AutoEnsembleSubestimator

A subestimator to train and consider for ensembling.

Parameters:
  • estimator – A tf.estimator.Estimator or tf.estimator.tpu.TPUEstimator instance to consider for ensembling.
  • train_input_fn

    A function that provides input data for training as minibatches. It can be used to implement ensemble methods like bootstrap aggregating (a.k.a. bagging) where each subnetwork trains on different slices of the training data. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below. NOTE: A Dataset must return at least two batches before hitting the end-of-input, otherwise all of training terminates.

      TODO: Figure out how to handle single-batch datasets.

    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by estimator#model_fn. They should satisfy the expectation of estimator#model_fn from inputs.
  • prediction_only – If set to True, only runs the subestimator in prediction mode.
Returns:

An AutoEnsembleSubestimator instance to be auto-ensembled.
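
For example, a bagging-style subestimator might be constructed as in the sketch below; make_train_input_fn and parse_example_fn are hypothetical helpers.

def make_train_input_fn(files):
  # Hypothetical helper: builds an input_fn over a subset of TFRecord files.
  def input_fn():
    dataset = tf.data.TFRecordDataset(files)
    dataset = dataset.map(parse_example_fn)  # parse_example_fn assumed defined
    return dataset.repeat().batch(32)
  return input_fn

subestimator = adanet.AutoEnsembleSubestimator(
    estimator=tf.estimator.LinearEstimator(
        head=head, feature_columns=feature_columns),
    train_input_fn=make_train_input_fn(train_data_files[:-1]))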

count()

Return number of occurrences of value.

estimator

Alias for field number 0

index()

Return first index of value.

Raises ValueError if the value is not present.

prediction_only

Alias for field number 2

train_input_fn

Alias for field number 1

AutoEnsembleTPUEstimator

class adanet.AutoEnsembleTPUEstimator(head, candidate_pool, max_iteration_steps, ensemblers=None, ensemble_strategies=None, logits_fn=None, last_layer_fn=None, evaluator=None, metric_fn=None, force_grow=False, adanet_loss_decay=0.9, model_dir=None, config=None, use_tpu=True, eval_on_tpu=True, export_to_tpu=True, train_batch_size=None, eval_batch_size=None, predict_batch_size=None, embedding_config_spec=None, debug=False, enable_ensemble_summaries=True, enable_subnetwork_summaries=True, global_step_combiner_fn=<function reduce_mean>, max_iterations=None, replay_config=None, **kwargs)[source]

Bases: adanet.core.tpu_estimator.TPUEstimator

A tf.estimator.tpu.TPUEstimator that learns to ensemble models.

Specifically, it learns to ensemble models from a candidate pool using the AdaNet algorithm.

This estimator is capable of training and evaluating on TPU. It can ensemble both tf.estimator.tpu.TPUEstimator candidates as well as regular tf.estimator.Estimator candidates, as long as these candidates are TPU compatible.

Note the following restrictions compared to AutoEnsembleEstimator:
  • All candidates must wrap their optimizers with a tf.tpu.CrossShardOptimizer.
  • The input_fn must expose a params argument.
  • The model_fn of tf.estimator.tpu.TPUEstimator candidates must also expose a params argument.

WARNING: This Estimator is a work in progress and the API could change at any moment. May not support all AutoEnsembleEstimator features.

# A simple example of learning to ensemble linear and neural network
# models on TPU.

import adanet
import tensorflow as tf

feature_columns = ...

head = tf.estimator.MultiClassHead(n_classes=10)

# Learn to ensemble linear and DNN models.
estimator = adanet.AutoEnsembleTPUEstimator(
    head=head,
    candidate_pool=lambda config: {
        "linear":
            tf.estimator.LinearEstimator(
                head=head,
                feature_columns=feature_columns,
                config=config,
                optimizer=tf.tpu.CrossShardOptimizer(...)),
        "dnn":
            tf.estimator.DNNEstimator(
                head=head,
                feature_columns=feature_columns,
                config=config,
                optimizer=tf.tpu.CrossShardOptimizer(...),
                hidden_units=[1000, 500, 100])},
    max_iteration_steps=50)

# Input builders
def input_fn_train(params):
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's
  # class index.
  pass
def input_fn_eval(params):
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's
  # class index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)
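
For reference, a TPU-compatible input_fn reads the per-core batch size from params (TPUEstimator injects params["batch_size"] from train_batch_size); the in-memory data below is a placeholder.

def input_fn_train(params):
  batch_size = params["batch_size"]  # injected by TPUEstimator
  dataset = tf.data.Dataset.from_tensor_slices(
      ({"x": [[1.0], [2.0], [3.0], [4.0]]}, [0, 1, 0, 1]))
  # drop_remainder=True yields the static batch shapes that TPUs require.
  return dataset.repeat().batch(batch_size, drop_remainder=True)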
Parameters:
  • head – A tf.contrib.estimator.Head instance for computing loss and evaluation metrics for every candidate.
  • candidate_pool – List of tf.estimator.tpu.TPUEstimator and AutoEnsembleSubestimator objects, or dict of string name to tf.estimator.tpu.TPUEstimator and AutoEnsembleSubestimator objects that are candidate subestimators to ensemble at each iteration. The order does not directly affect which candidates will be included in the final ensemble, but will affect the name of the candidate. When using a dict, the string key becomes the candidate subestimator’s name. Alternatively, this argument can be a function that takes a config argument and returns the aforementioned values in case the objects need to be re-instantiated at each adanet iteration.
  • max_iteration_steps – See adanet.Estimator.
  • logits_fn

    A function for fetching the subnetwork logits from a tf.estimator.EstimatorSpec, which should obey the following signature:

    • Args: Can only have the following argument:
      • estimator_spec: The candidate’s tf.estimator.EstimatorSpec.
    • Returns: Logits tf.Tensor or dict of string to logits tf.Tensor (for multi-head) for the candidate subnetwork extracted from the given estimator_spec. When None, it will default to returning estimator_spec.predictions when they are a tf.Tensor, or the tf.Tensor for the key ‘logits’ when they are a dict of string to tf.Tensor.
  • last_layer_fn

    An optional function for fetching the subnetwork last_layer from a tf.estimator.EstimatorSpec, which should obey the following signature:

    • Args: Can only have the following argument:
      • estimator_spec: The candidate’s tf.estimator.EstimatorSpec.
    • Returns: Last layer tf.Tensor or dict of string to last layer tf.Tensor (for multi-head) for the candidate subnetwork extracted from the given estimator_spec. The last_layer can be used for learning ensembles or exporting them as embeddings.

    When None, it will default to using the logits as the last_layer.

  • ensemblers – See adanet.Estimator.
  • ensemble_strategies – See adanet.Estimator.
  • evaluator – See adanet.Estimator.
  • metric_fn – See adanet.Estimator.
  • force_grow – See adanet.Estimator.
  • adanet_loss_decay – See adanet.Estimator.
  • model_dir – See adanet.Estimator.
  • config – See adanet.Estimator.
  • use_tpu – See adanet.Estimator.
  • eval_on_tpu – See adanet.Estimator.
  • export_to_tpu – See adanet.Estimator.
  • train_batch_size – See adanet.Estimator.
  • eval_batch_size – See adanet.Estimator.
  • embedding_config_spec – See adanet.Estimator.
  • debug – See adanet.Estimator.
  • enable_ensemble_summaries – See adanet.Estimator.
  • enable_subnetwork_summaries – See adanet.Estimator.
  • global_step_combiner_fn – See adanet.Estimator.
  • max_iterations – See adanet.Estimator.
  • replay_config – See adanet.Estimator.
  • **kwargs – Extra keyword args passed to the parent.
Returns:

An adanet.AutoEnsembleTPUEstimator instance.

Raises:

ValueError – If any of the candidates in candidate_pool are not tf.estimator.Estimator instances.

eval_dir(name=None)

Shows the directory name where evaluation metrics are dumped.

Parameters:name – Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns:A string which is the path of the directory containing evaluation metrics.
evaluate(input_fn, steps=None, hooks=None, checkpoint_path=None, name=None)

Evaluates the model given evaluation data input_fn.

For each step, calls input_fn, which returns one batch of data. Evaluates until either steps batches are processed or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

Parameters:
  • input_fn

    A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • steps – Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
  • checkpoint_path – Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
  • name – Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns:

A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises:

ValueError – If steps <= 0.

experimental_export_all_saved_models(export_dir_base, input_receiver_fn_map, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None)

Exports a SavedModel with tf.MetaGraphDefs for each requested mode.

For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator’s model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.

For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s.
  • input_receiver_fn_map – dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns:

The path to the exported directory as a bytes object.

Raises:

ValueError – if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode='infer')

Exports inference graph as a SavedModel into the given dir.

For a detailed guide, see [Using SavedModel with Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators).

This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator’s model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported `SavedModel`s.
  • serving_input_receiver_fn – A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
  • experimental_mode – A tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns:

The path to the exported directory as a bytes object.

Raises:
  • ValueError – if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.
export_savedmodel(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False)

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.

get_variable_names()

Returns list of all variable names in this model.

Returns:List of names.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
get_variable_value(name)

Returns value of the variable given by name.

Parameters:name – string or a list of string, name of the tensor.
Returns:Numpy array - value of the tensor.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
latest_checkpoint()

Finds the filename of the latest saved checkpoint file in model_dir.

Returns:The full path to the latest checkpoint or None if no checkpoint was found.
model_fn

Returns the model_fn which is bound to self.params.

Returns:The model_fn with the following signature: def model_fn(features, labels, mode, config)
predict(input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True)

Yields predictions for given features.

Please note that interleaving two predict outputs does not work. See: [issue/20506]( https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)

Parameters:
  • input_fn

    A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of the Dataset object must have the same constraints as below.
    • features: A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
    • A tuple, in which case the first item is extracted as features.
  • predict_keys – list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
  • checkpoint_path – Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
  • yield_single_examples – If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields:

Evaluated values of predictions tensors.

Raises:
  • ValueError – If batch length of predictions is not the same and yield_single_examples is True.
  • ValueError – If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.
train(input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)

Trains a model given training data input_fn.

NOTE: If a given input_fn raises an OutOfRangeError, then all of training will exit. The best practice is to make the training dataset repeat forever, in order to perform model search for more than one iteration.

Parameters:
  • input_fn

    A function that provides input data for training as minibatches. See [Premade Estimators]( https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
  • steps – Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don’t want incremental behavior, set max_steps instead. If set, max_steps must be None.
  • max_steps – Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iterations, since the first call did all 100 steps.
  • saving_listeners – list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns:

self, for chaining.

Raises:
  • ValueError – If both steps and max_steps are not None.
  • ValueError – If either steps or max_steps <= 0.

Estimator

class adanet.Estimator(head, subnetwork_generator, max_iteration_steps, ensemblers=None, ensemble_strategies=None, evaluator=None, report_materializer=None, metric_fn=None, force_grow=False, replicate_ensemble_in_training=False, adanet_loss_decay=0.9, delay_secs_per_worker=5, max_worker_delay_secs=60, worker_wait_secs=5, worker_wait_timeout_secs=7200, model_dir=None, report_dir=None, config=None, debug=False, enable_ensemble_summaries=True, enable_subnetwork_summaries=True, global_step_combiner_fn=<function reduce_mean>, max_iterations=None, export_subnetwork_logits=False, export_subnetwork_last_layer=True, replay_config=None, **kwargs)[source]

Bases: tensorflow_estimator.python.estimator.estimator.EstimatorV2

A tf.estimator.Estimator for training, evaluation, and serving.

This implementation uses an adanet.subnetwork.Generator as its weak learning algorithm for generating candidate subnetworks. These are trained in parallel using a single graph per iteration. At the end of each iteration, the estimator saves the sub-graph of the best subnetwork ensemble and its weights as a separate checkpoint. At the beginning of the next iteration, the estimator imports the previous iteration’s frozen graph and adds ops for the next candidates as part of a new graph and session. This allows the estimator to have the performance of TensorFlow’s static graph constraint (minus the performance hit of reconstructing a graph between iterations), while retaining the flexibility of a dynamic graph.

NOTE: Subclassing tf.estimator.Estimator is only necessary to work with tf.estimator.train_and_evaluate() which asserts that the estimator argument is a tf.estimator.Estimator subclass. However, all training is delegated to a separate tf.estimator.Estimator instance. It is responsible for supporting both local and distributed training. As such, the adanet.Estimator is only responsible for bookkeeping across iterations.

Parameters:
  • head – A tf.contrib.estimator.Head instance for computing loss and evaluation metrics for every candidate.
  • subnetwork_generator – The adanet.subnetwork.Generator which defines the candidate subnetworks to train and evaluate at every AdaNet iteration.
  • max_iteration_steps – Total number of steps for which to train candidates per iteration. If OutOfRange or StopIteration occurs in the middle, training stops before max_iteration_steps steps. When None, it will train the current iteration forever.
  • ensemblers – An iterable of adanet.ensemble.Ensembler objects that define how to ensemble a group of subnetworks. If there are multiple, each should have a different name property.
  • ensemble_strategies – An iterable of adanet.ensemble.Strategy objects that define the candidate ensembles of subnetworks to explore at each iteration.
  • evaluator – An adanet.Evaluator for candidate selection after all subnetworks are done training. When None, candidate selection uses a moving average of their adanet.Ensemble AdaNet loss during training instead. In order to use the AdaNet algorithm as described in [Cortes et al., ‘17], the given adanet.Evaluator must be created with the same dataset partition used during training. Otherwise, this framework will perform AdaNet.HoldOut which uses a holdout set for candidate selection, but does not benefit from learning guarantees.
  • report_materializer – An adanet.ReportMaterializer. Its reports are made available to the subnetwork_generator at the next iteration, so that it can adapt its search space. When None, the subnetwork_generator’s generate_candidates() method will receive empty lists for its previous_ensemble_reports and all_reports arguments.
  • metric_fn

    A function for adding custom evaluation metrics, which should obey the following signature (see the sketch after the Raises section below):

    • Args: Can only have the following three arguments in any order:
      • predictions: Predictions Tensor or dict of Tensor created by given head.
      • features: Input dict of Tensor objects created by input_fn which is given to estimator.evaluate() as an argument.
      • labels: Labels Tensor or dict of Tensor (for multi-head) created by input_fn which is given to estimator.evaluate() as an argument.
    • Returns: Dict of metric results keyed by name. Final metrics are a union of this and head’s existing metrics. If there is a name conflict between this and head’s existing metrics, this will override the existing one. The values of the dict are the results of calling a metric function, namely a (metric_tensor, update_op) tuple.
  • force_grow – Boolean override that forces the ensemble to grow by one subnetwork at the end of each iteration. Normally at the end of each iteration, AdaNet selects the best candidate ensemble according to its performance on the AdaNet objective. In some cases, the best ensemble is the previous_ensemble as opposed to one that includes a newly trained subnetwork. When True, the algorithm will not select the previous_ensemble as the best candidate, and will ensure that after n iterations the final ensemble is composed of n subnetworks.
  • replicate_ensemble_in_training – Whether to rebuild the frozen subnetworks of the ensemble in training mode, which can change the outputs of the frozen subnetworks in the ensemble. When False and during candidate training, the frozen subnetworks in the ensemble are in prediction mode, so training-only ops like dropout are not applied to them. When True and training the candidates, the frozen subnetworks will be in training mode as well, so they will apply training-only ops like dropout. This argument is useful for regularizing learning mixture weights, or for making training-only side inputs available in subsequent iterations. For most use-cases, this should be False.
  • adanet_loss_decay – Float decay for the exponential-moving-average of the AdaNet objective throughout training. This moving average is a data-driven way of tracking the best candidate with only the training set.
  • delay_secs_per_worker – Float number of seconds to delay starting the i-th worker. Staggering worker start-up during distributed asynchronous SGD can improve training stability and speed up convergence. Each worker will wait (i+1) * delay_secs_per_worker seconds before beginning training.
  • max_worker_delay_secs – Float max number of seconds to delay starting the i-th worker. Staggering worker start-up during distributed asynchronous SGD can improve training stability and speed up convergence. Each worker will wait up to max_worker_delay_secs before beginning training.
  • worker_wait_secs – Float number of seconds for workers to wait before checking if the chief prepared the next iteration.
  • worker_wait_timeout_secs – Float number of seconds for workers to wait for chief to prepare the next iteration during distributed training. This is needed to prevent workers waiting indefinitely for a chief that may have crashed or been turned down. When the timeout is exceeded, the worker exits the train loop. In situations where the chief job is much slower than the worker jobs, this timeout should be increased.
  • model_dir – Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into a estimator to continue training a previously saved model.
  • report_dir – Directory where the adanet.subnetwork.MaterializedReports materialized by report_materializer would be saved. If report_materializer is None, this will not save anything. If None or empty string, defaults to <model_dir>/report.
  • config – RunConfig object to configure the runtime settings.
  • debug – Boolean to enable debug mode which will check features and labels for Infs and NaNs.
  • enable_ensemble_summaries – Whether to record summaries to display in TensorBoard for each ensemble candidate. Disable to reduce memory and disk usage per run.
  • enable_subnetwork_summaries – Whether to record summaries to display in TensorBoard for each subnetwork. Disable to reduce memory and disk usage per run.
  • global_step_combiner_fn – Function for combining each subnetwork’s iteration step into the global step. By default it is the average of all subnetwork iteration steps, which may affect the global_steps/sec as subnetworks early stop and no longer increase their iteration step.
  • max_iterations – Integer maximum number of AdaNet iterations (a.k.a. rounds) of generating new subnetworks and ensembles, training them, and evaluating them against the current best ensemble. When None, AdaNet will keep iterating until Estimator#train terminates. Otherwise, if max_iterations is supplied and is met or exceeded during training, training will terminate even before steps or max_steps.
  • export_subnetwork_logits – Whether to include subnetwork logits in exports.
  • export_subnetwork_last_layer – Whether to include subnetwork last layer in exports.
  • replay_config – Optional adanet.replay.Config to specify a previous AdaNet run to replay. Given the exact same search space but potentially different training data, the replay_config causes the estimator to reconstruct the previously trained model without performing a search. NOTE: The previous run must have executed with identical hyperparameters as the new run in order to be replayable. The only supported difference is that the underlying data can change.
  • **kwargs – Extra keyword args passed to the parent.
Returns:

An adanet.Estimator instance.

Raises:
  • ValueError – If subnetwork_generator is None.
  • ValueError – If max_iteration_steps is <= 0.
  • ValueError – If model_dir is not specified during distributed training.
  • ValueError – If max_iterations is <= 0.
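
A minimal metric_fn sketch is shown below; the "predictions" key is an assumption about the head's prediction dict, and subnetwork_generator is assumed to be defined elsewhere.

def metric_fn(predictions, features, labels):
  # Hypothetical extra metric; the available prediction keys depend on the head.
  pred = predictions["predictions"]
  return {
      "mean_prediction": tf.compat.v1.metrics.mean(pred),
  }

estimator = adanet.Estimator(
    head=head,
    subnetwork_generator=subnetwork_generator,  # assumed defined elsewhere
    max_iteration_steps=1000,
    metric_fn=metric_fn)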
eval_dir(name=None)[source]

Shows the directory name where evaluation metrics are dumped.

Parameters:name – Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns:A string which is the path of the directory containing evaluation metrics.
evaluate(input_fn, steps=None, hooks=None, checkpoint_path=None, name=None)[source]

Evaluates the model given evaluation data input_fn.

For each step, calls input_fn, which returns one batch of data. Evaluates until either steps batches are processed or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

Parameters:
  • input_fn

    A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • steps – Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
  • checkpoint_path – Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
  • name – Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.
Returns:

A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises:

ValueError – If steps <= 0.

experimental_export_all_saved_models(export_dir_base, input_receiver_fn_map, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None)[source]

Exports a SavedModel with tf.MetaGraphDefs for each requested mode.

For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator’s model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.

For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
  • input_receiver_fn_map – dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns:

The path to the exported directory as a bytes object.

Raises:

ValueError – if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.
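As an illustration, a minimal sketch that exports only the predict graph via this method; feature_columns and the "/tmp/adanet_export" base directory are assumptions for this example, not part of the API:

# A minimal sketch; `estimator` and `feature_columns` are assumed to be
# defined as in the class-level example, and "/tmp/adanet_export" is an
# illustrative path.
import tensorflow as tf

feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
input_receiver_fn_map = {
    tf.estimator.ModeKeys.PREDICT:
        tf.estimator.export.build_parsing_serving_input_receiver_fn(
            feature_spec),
}
export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base="/tmp/adanet_export",
    input_receiver_fn_map=input_receiver_fn_map)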

export_saved_model(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode='infer')[source]

Exports inference graph as a SavedModel into the given dir.

For a detailed guide, see [Using SavedModel with Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators).

This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator’s model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
  • serving_input_receiver_fn – A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
  • experimental_mode – tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns:

The path to the exported directory as a bytes object.

Raises:
  • ValueError – if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.
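For example, a minimal export sketch; feature_columns and the "/tmp/adanet_serving" base directory are illustrative assumptions:

# A minimal sketch; `estimator` and `feature_columns` are assumed to be
# defined as in the class-level example, and "/tmp/adanet_serving" is an
# illustrative path.
import tensorflow as tf

feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_dir = estimator.export_saved_model(
    export_dir_base="/tmp/adanet_serving",
    serving_input_receiver_fn=serving_input_receiver_fn)
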
export_savedmodel(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False)[source]

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.

get_variable_names()[source]

Returns list of all variable names in this model.

Returns:List of names.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
get_variable_value(name)[source]

Returns value of the variable given by name.

Parameters:name – string or a list of string, name of the tensor.
Returns:Numpy array - value of the tensor.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
latest_checkpoint()[source]

Finds the filename of the latest saved checkpoint file in model_dir.

Returns:The full path to the latest checkpoint or None if no checkpoint was found.
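Together, latest_checkpoint, get_variable_names, and get_variable_value can be used to inspect a trained model, as in this minimal sketch (assumes estimator has already produced at least one checkpoint):

# A minimal sketch; assumes `estimator` has already written a checkpoint.
print(estimator.latest_checkpoint())
for name in estimator.get_variable_names():
  value = estimator.get_variable_value(name)  # NumPy array.
  print(name, value.shape)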
model_fn

Returns the model_fn which is bound to self.params.

Returns:The model_fn with the following signature: def model_fn(features, labels, mode, config)
predict(input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True)[source]

Yields predictions for given features.

Please note that interleaving two predict outputs does not work. See: [issue/20506]( https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)

Parameters:
  • input_fn – A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:
    • A tf.data.Dataset object: Outputs of the Dataset object must have the same constraints as below.
    • features: A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
    • A tuple, in which case the first item is extracted as features.
  • predict_keys – list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
  • checkpoint_path – Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
  • yield_single_examples – If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields:

Evaluated values of predictions tensors.

Raises:
  • ValueError – If batch length of predictions is not the same and yield_single_examples is True.
  • ValueError – If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.
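For example, a minimal prediction loop, assuming input_fn_predict returns a tf.data.Dataset of features only as in the class-level example:

# A minimal sketch; `estimator` and `input_fn_predict` are assumed to be
# defined as in the class-level example.
for prediction in estimator.predict(input_fn=input_fn_predict,
                                    yield_single_examples=True):
  print(prediction)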
train(input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)[source]

Trains a model given training data input_fn.

NOTE: If a given input_fn raises an OutOfRangeError, then all of training will exit. The best practice is to make the training dataset repeat forever, in order to perform model search for more than one iteration.

Parameters:
  • input_fn

    A function that provides input data for training as minibatches. See [Premade Estimators]( https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
  • steps – Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don’t want to have incremental behavior please set max_steps instead. If set, max_steps must be None.
  • max_steps – Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
  • saving_listeners – list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns:

self, for chaining.

Raises:
  • ValueError – If both steps and max_steps are not None.
  • ValueError – If either steps or max_steps <= 0.
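Following the NOTE above, a minimal sketch of a training dataset that repeats forever so that model search can run over multiple AdaNet iterations; x_train and y_train are placeholder NumPy arrays used only for illustration:

# A minimal sketch of the "repeat forever" pattern recommended above.
# `x_train`, `y_train`, and `estimator` are assumed to be defined elsewhere.
import tensorflow as tf

def input_fn_train():
  dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
  return dataset.shuffle(1000).repeat().batch(32)

estimator.train(input_fn=input_fn_train, max_steps=1000)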

TPUEstimator

class adanet.TPUEstimator(head, subnetwork_generator, max_iteration_steps, ensemblers=None, ensemble_strategies=None, evaluator=None, report_materializer=None, metric_fn=None, force_grow=False, replicate_ensemble_in_training=False, adanet_loss_decay=0.9, model_dir=None, report_dir=None, config=None, use_tpu=True, eval_on_tpu=True, export_to_tpu=True, train_batch_size=None, eval_batch_size=None, predict_batch_size=None, embedding_config_spec=None, debug=False, enable_ensemble_summaries=True, enable_subnetwork_summaries=True, export_subnetwork_logits=False, export_subnetwork_last_layer=True, global_step_combiner_fn=<function reduce_mean>, max_iterations=None, replay_config=None, add_predict_batch_config=True, **kwargs)[source]

Bases: adanet.core.estimator.Estimator, tensorflow_estimator.python.estimator.tpu.tpu_estimator.TPUEstimator

An adanet.Estimator capable of training and evaluating on TPU.

Unless use_tpu=False, training will run on TPU. However, certain parts of the AdaNet training loop, such as report materialization and best candidate selection, will still occur on CPU. Furthermore, if using TPUEmbedding (i.e. embedding_config_spec is supplied), inference will also occur on CPU.

TODO: Provide the missing functionality detailed below. N.B: Embeddings using the TPUEmbedding (i.e. embedding_config_spec is provided) only support shared_embedding_columns when running for multiple AdaNet iterations. Using regular embedding_columns will cause iterations 2..n to fail because of mismatched embedding scopes.

Parameters:
  • head – See adanet.Estimator.
  • subnetwork_generator – See adanet.Estimator.
  • max_iteration_steps – See adanet.Estimator.
  • ensemblers – See adanet.Estimator.
  • ensemble_strategies – See adanet.Estimator.
  • evaluator – See adanet.Estimator.
  • report_materializer – See adanet.Estimator.
  • metric_fn – See adanet.Estimator.
  • force_grow – See adanet.Estimator.
  • replicate_ensemble_in_training – See adanet.Estimator.
  • adanet_loss_decay – See adanet.Estimator.
  • report_dir – See adanet.Estimator.
  • config – See adanet.Estimator.
  • use_tpu – Boolean to enable training on TPU. Defaults to True and is only provided to allow debugging models on CPU/GPU. Use adanet.Estimator instead if you do not plan to run on TPU.
  • eval_on_tpu – Boolean to enable evaluating on TPU. Defaults to True. Ignored if use_tpu=False.
  • export_to_tpu – See tf.compat.v1.estimator.tpu.TPUEstimator.
  • train_batch_size – See tf.compat.v1.estimator.tpu.TPUEstimator. Defaults to 0 if None.
  • eval_batch_size – See tf.compat.v1.estimator.tpu.TPUEstimator. Defaults to train_batch_size if None.
  • predict_batch_size – See tf.compat.v1.estimator.tpu.TPUEstimator. Defaults to eval_batch_size if None.
  • embedding_config_spec – See tf.compat.v1.estimator.tpu.TPUEstimator. If supplied, predict will be called on CPU and no TPU-compatible SavedModel will be exported.
  • debug – See adanet.Estimator.
  • enable_ensemble_summaries – See adanet.Estimator.
  • enable_subnetwork_summaries – See adanet.Estimator.
  • export_subnetwork_logits – Whether to include subnetwork logits in exports.
  • export_subnetwork_last_layer – Whether to include subnetwork last layer in exports.
  • global_step_combiner_fn – See adanet.Estimator.
  • max_iterations – See adanet.Estimator.
  • replay_config – See adanet.Estimator.
  • add_predict_batch_config – If True, supplies a default tpu_estimator.BatchConfig when calling tpu_estimator.model_fn_inference_on_tpu, otherwise supplies None.
  • **kwargs – Extra keyword args passed to the parent.
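A minimal construction sketch; head, MySubnetworkGenerator, and the run configuration are illustrative assumptions (a real TPU run also needs cluster settings in the RunConfig):

# A minimal sketch; `head` and `MySubnetworkGenerator` are illustrative
# assumptions, not part of this API.
import adanet
import tensorflow as tf

run_config = tf.compat.v1.estimator.tpu.RunConfig(
    tpu_config=tf.compat.v1.estimator.tpu.TPUConfig(iterations_per_loop=100))

estimator = adanet.TPUEstimator(
    head=head,
    subnetwork_generator=MySubnetworkGenerator(),
    max_iteration_steps=1000,
    train_batch_size=128,
    use_tpu=True,
    config=run_config)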
eval_dir(name=None)

Shows the directory name where evaluation metrics are dumped.

Parameters:name – Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard.
Returns:A string which is the path of the directory containing the evaluation metrics.
evaluate(input_fn, steps=None, hooks=None, checkpoint_path=None, name=None)

Evaluates the model given evaluation data input_fn.

For each step, calls input_fn, which returns one batch of data. Evaluates until either steps batches are processed or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

Parameters:
  • input_fn – A function that constructs the input data for evaluation. See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:
    • A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • steps – Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
  • checkpoint_path – Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
  • name – Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard.
Returns:

A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises:

ValueError – If steps <= 0.

experimental_export_all_saved_models(export_dir_base, input_receiver_fn_map, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None)

Exports a SavedModel with tf.MetaGraphDefs for each requested mode.

For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator’s model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.

For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
  • input_receiver_fn_map – dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns:

The path to the exported directory as a bytes object.

Raises:

ValueError – if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode='infer')

Exports inference graph as a SavedModel into the given dir.

For a detailed guide, see [Using SavedModel with Estimators](https://tensorflow.org/guide/saved_model#using_savedmodel_with_estimators).

This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator’s model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {‘my_asset_file.txt’: ‘/path/to/my_asset_file.txt’}.

The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Parameters:
  • export_dir_base – A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
  • serving_input_receiver_fn – A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
  • assets_extra – A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
  • as_text – whether to write the SavedModel proto in text format.
  • checkpoint_path – The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
  • experimental_mode – tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns:

The path to the exported directory as a bytes object.

Raises:
  • ValueError – if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.
export_savedmodel(export_dir_base, serving_input_receiver_fn, hooks=None, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False)

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.

get_variable_names()

Returns list of all variable names in this model.

Returns:List of names.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
get_variable_value(name)

Returns value of the variable given by name.

Parameters:name – string or a list of string, name of the tensor.
Returns:Numpy array - value of the tensor.
Raises:ValueError – If the Estimator has not produced a checkpoint yet.
latest_checkpoint()

Finds the filename of the latest saved checkpoint file in model_dir.

Returns:The full path to the latest checkpoint or None if no checkpoint was found.
model_fn

Returns the model_fn which is bound to self.params.

Returns:The model_fn with the following signature: def model_fn(features, labels, mode, config)
predict(input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True)[source]

Yields predictions for given features.

Please note that interleaving two predict outputs does not work. See: [issue/20506]( https://github.com/tensorflow/tensorflow/issues/20506#issuecomment-422208517)

Parameters:
  • input_fn – A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See [Premade Estimators](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:
    • A tf.data.Dataset object: Outputs of the Dataset object must have the same constraints as below.
    • features: A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
    • A tuple, in which case the first item is extracted as features.
  • predict_keys – list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
  • checkpoint_path – Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
  • yield_single_examples – If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields:

Evaluated values of predictions tensors.

Raises:
  • ValueError – If batch length of predictions is not the same and yield_single_examples is True.
  • ValueError – If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.
train(input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)

Trains a model given training data input_fn.

NOTE: If a given input_fn raises an OutOfRangeError, then all of training will exit. The best practice is to make the training dataset repeat forever, in order to perform model search for more than one iteration.

Parameters:
  • input_fn

    A function that provides input data for training as minibatches. See [Premade Estimators]( https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:

    • A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below.
    • A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
  • hooks – List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
  • steps – Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don’t want to have incremental behavior please set max_steps instead. If set, max_steps must be None.
  • max_steps – Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps.
  • saving_listeners – list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns:

self, for chaining.

Raises:
  • ValueError – If both steps and max_steps are not None.
  • ValueError – If either steps or max_steps <= 0.

Evaluator

Measures adanet.Ensemble performance on a given dataset.

Evaluator

class adanet.Evaluator(input_fn, metric_name='adanet_loss', objective='minimize', steps=None)[source]

Evaluates candidate ensemble performance.

class Objective[source]

The Evaluator objective for the metric being optimized.

Two objectives are currently supported:
  • MINIMIZE: Lower is better for the metric being optimized.
  • MAXIMIZE: Higher is better for the metric being optimized.
__init__(input_fn, metric_name='adanet_loss', objective='minimize', steps=None)[source]

Initializes a new Evaluator instance.

Parameters:
  • input_fn – Input function returning a tuple of: features - Dictionary of string feature name to Tensor. labels - Tensor of labels.
  • metric_name – The name of the evaluation metrics to use when choosing the best ensemble. Must refer to a valid evaluation metric.
  • objective – Either Objective.MINIMIZE or Objective.MAXIMIZE.
  • steps – Number of steps for which to evaluate the ensembles. If an OutOfRangeError occurs, evaluation stops. If set to None, will iterate the dataset until all inputs are exhausted.
Returns:

An adanet.Evaluator instance.
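For instance, a minimal sketch of wiring an Evaluator into an adanet.Estimator; input_fn_eval, head, and subnetwork_generator are assumed to be defined elsewhere:

# A minimal sketch; `input_fn_eval`, `head`, and `subnetwork_generator` are
# assumed to exist.
import adanet

evaluator = adanet.Evaluator(
    input_fn=input_fn_eval,
    metric_name="adanet_loss",
    objective="minimize",
    steps=100)

estimator = adanet.Estimator(
    head=head,
    subnetwork_generator=subnetwork_generator,
    max_iteration_steps=1000,
    evaluator=evaluator)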

evaluate(sess, ensemble_metrics)[source]

Evaluates the given AdaNet objectives on the data from input_fn.

The candidates are fed the same batches of features and labels as provided by input_fn, and their losses are computed and summed over steps batches.

Parameters:
  • sess – Session instance with most recent variable values loaded.
  • ensemble_metrics – A list of dictionaries of tf.metrics for each candidate ensemble.
Returns:

List of evaluated metrics.

input_fn

Return the input_fn.

metric_name

Returns the name of the metric being optimized.

objective_fn

Returns a fn which selects the best metric based on the objective.

steps

Return the number of evaluation steps.

Keras

Experimental Keras API for training, evaluating, predicting, and serving AdaNet models.

AutoEnsemble

Model

Summary

Extends tf.summary to power AdaNet’s TensorBoard integration.

Summary

class adanet.Summary[source]

Interface for writing summaries to Tensorboard.

audio(name, tensor, sample_rate, max_outputs=3, family=None, encoding=None, description=None)[source]

Writes an audio summary.

Parameters:
  • name – A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes.
  • tensor – A Tensor representing audio data with shape [k, t, c], where k is the number of audio clips, t is the number of frames, and c is the number of channels. Elements should be floating-point values in [-1.0, 1.0]. Any of the dimensions may be statically unknown (i.e., None).
  • sample_rate – An int or rank-0 int32 Tensor that represents the sample rate, in Hz. Must be positive.
  • max_outputs – Optional int or rank-0 integer Tensor. At most this many audio clips will be emitted at each step. When more than max_outputs many clips are provided, the first max_outputs many clips will be used and the rest silently discarded.
  • family – Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. DEPRECATED in TF 2.
  • encoding – Optional constant str for the desired encoding. Only “wav” is currently supported, but this is not guaranteed to remain the default, so if you want “wav” in particular, set this explicitly.
  • description – Optional long-form description for this summary, as a constant str. Markdown is supported. Defaults to empty.
Returns:

A scalar Tensor of type string. The serialized tf.Summary protocol buffer.

histogram(name, values, family=None, buckets=None, description=None)[source]

Outputs a tf.Summary protocol buffer with a histogram.

Adding a histogram summary makes it possible to visualize your data’s distribution in TensorBoard. You can see a detailed explanation of the TensorBoard histogram dashboard [here](https://www.tensorflow.org/get_started/tensorboard_histograms).

The generated [tf.Summary]( tensorflow/core/framework/summary.proto) has one summary value containing a histogram for values.

This op reports an InvalidArgument error if any value is not finite.

Parameters:
  • name – A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes.
  • values – A Tensor of any shape. Must be castable to float64.
  • family – Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. DEPRECATED in TF 2.
  • buckets – Optional positive int. The output will have this many buckets, except in two edge cases. If there is no data, then there are no buckets. If there is data but all points have the same value, then there is one bucket whose left and right endpoints are the same.
  • description – Optional long-form description for this summary, as a constant str. Markdown is supported. Defaults to empty.
Returns:

A scalar Tensor of type string. The serialized tf.Summary protocol buffer.

image(name, tensor, max_outputs=3, family=None, description=None)[source]

Outputs a tf.Summary protocol buffer with images.

The summary has up to max_outputs summary values containing images. The images are built from tensor which must be 4-D with shape [batch_size, height, width, channels] and where channels can be:

  • 1: tensor is interpreted as Grayscale.
  • 3: tensor is interpreted as RGB.
  • 4: tensor is interpreted as RGBA.

The images have the same number of channels as the input tensor. For float input, the values are normalized one image at a time to fit in the range [0, 255]. uint8 values are unchanged. The op uses two different normalization algorithms:

  • If the input values are all positive, they are rescaled so the largest one is 255.
  • If any input value is negative, the values are shifted so input value 0.0 is at 127. They are then rescaled so that either the smallest value is 0, or the largest one is 255.

The tag in the outputted tf.Summary.Value protobufs is generated based on the name, with a suffix depending on the max_outputs setting:

  • If max_outputs is 1, the summary value tag is ‘name/image’.
  • If max_outputs is greater than 1, the summary value tags are generated sequentially as ‘name/image/0’, ‘name/image/1’, etc.
Parameters:
  • name – A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes.
  • tensor – A Tensor representing pixel data with shape [k, h, w, c], where k is the number of images, h and w are the height and width of the images, and c is the number of channels, which should be 1, 2, 3, or 4 (grayscale, grayscale with alpha, RGB, RGBA). Any of the dimensions may be statically unknown (i.e., None). Floating point data will be clipped to the range [0,1).
  • max_outputs – Optional int or rank-0 integer Tensor. At most this many images will be emitted at each step. When more than max_outputs many images are provided, the first max_outputs many images will be used and the rest silently discarded.
  • family – Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. DEPRECATED in TF 2.
  • description – Optional long-form description for this summary, as a constant str. Markdown is supported. Defaults to empty.
Returns:

A scalar Tensor of type string. The serialized tf.Summary protocol buffer.

scalar(name, tensor, family=None, description=None)[source]

Outputs a tf.Summary protocol buffer containing a single scalar value.

The generated tf.Summary has a Tensor.proto containing the input Tensor.

Parameters:
  • name – A name for this summary. The summary tag used for TensorBoard will be this name prefixed by any active name scopes.
  • tensor – A real numeric scalar value, convertible to a float32 Tensor.
  • family – Optional; if provided, used as the prefix of the summary tag name, which controls the tab name used for display on Tensorboard. DEPRECATED in TF 2.
  • description – Optional long-form description for this summary, as a constant str. Markdown is supported. Defaults to empty.
Returns:

A scalar Tensor of type string, which contains a tf.Summary protobuf.

Raises:

ValueError – If tensor has the wrong shape or type.
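These methods are typically called on the summary object that AdaNet passes into a subnetwork builder. The following is a hedged sketch: summary is assumed to be the adanet.Summary instance passed to adanet.subnetwork.Builder.build_subnetwork, and last_layer is an assumed activation tensor built inside that method.

# A hedged sketch; `summary` and `last_layer` are assumptions described above.
import tensorflow as tf

summary.scalar("last_layer_mean", tf.reduce_mean(last_layer))
summary.histogram("last_layer_activations", last_layer)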

ReportMaterializer

ReportMaterializer

class adanet.ReportMaterializer(input_fn, steps=None)[source]

Materializes reports.

Specifically it materializes a subnetwork’s adanet.subnetwork.Report instances into adanet.subnetwork.MaterializedReport instances.

Requires an input function input_fn that returns a tuple of:

  • features: Dictionary of string feature name to Tensor.
  • labels: Tensor of labels.
Parameters:
  • input_fn – The input function.
  • steps – Number of steps for which to materialize the ensembles. If an OutOfRangeError occurs, materialization stops. If set to None, will iterate the dataset until all inputs are exhausted.
Returns:

A ReportMaterializer instance.
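For example, a minimal sketch of passing a ReportMaterializer to an adanet.Estimator; input_fn_eval, head, and subnetwork_generator are assumed to be defined elsewhere:

# A minimal sketch; `input_fn_eval`, `head`, and `subnetwork_generator` are
# assumed to exist.
import adanet

report_materializer = adanet.ReportMaterializer(input_fn=input_fn_eval,
                                                steps=100)

estimator = adanet.Estimator(
    head=head,
    subnetwork_generator=subnetwork_generator,
    max_iteration_steps=1000,
    report_materializer=report_materializer)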

input_fn

Returns the input_fn that materialize_subnetwork_reports would run on.

Even though this property appears to be unused, it would be used to build the AdaNet model graph inside AdaNet estimator.train(). After the graph is built, the queue_runners are started, and the initializers are run, AdaNet estimator.train() passes its tf.Session as an argument to materialize_subnetwork_reports(), thus indirectly making input_fn available to materialize_subnetwork_reports.

materialize_subnetwork_reports(sess, iteration_number, subnetwork_reports, included_subnetwork_names)[source]

Materializes the Tensor objects in subnetwork_reports using sess.

This converts the Tensors in subnetwork_reports to ndarrays, logs the progress, converts the ndarrays to python primitives, then packages them into adanet.subnetwork.MaterializedReports.

Parameters:
  • sess – Session instance with most recent variable values loaded.
  • iteration_number – Integer iteration number.
  • subnetwork_reports – Dict mapping string names to subnetwork.Report objects to be materialized.
  • included_subnetwork_names – List of string names of the subnetwork.Reports that are included in the final ensemble.
Returns:

List of adanet.subnetwork.MaterializedReport objects.

steps

Return the number of steps.