finetuner.tuner.keras package#


Module contents#

class finetuner.tuner.keras.KerasTuner(embed_model=None, loss='SiameseLoss', configure_optimizer=None, learning_rate=0.001, scheduler_step='batch', callbacks=None, device='cpu', **kwargs)[source]#

Bases: finetuner.tuner.base.BaseTuner[tensorflow.keras.layers.Layer, keras.engine.data_adapter.KerasSequenceAdapter, tensorflow.keras.optimizers.Optimizer, tensorflow.keras.optimizers.schedules.LearningRateSchedule]

Create the tuner instance.

  • embed_model (Optional[~AnyDNN]) – Model that produces embeddings from inputs.

  • loss (Union[BaseLoss, str]) – Either the loss object instance, or the name of the loss function. Currently available losses are SiameseLoss and TripletLoss.

  • configure_optimizer (Optional[Callable[[~AnyDNN], Union[~AnyOptimizer, Tuple[~AnyOptimizer, ~AnyScheduler]]]]) –

    A function that allows you to provide a custom optimizer and learning rate. The function should take one input (the embedding model) and return either just an optimizer, or a tuple of an optimizer and a learning rate scheduler.

    For Keras, you should provide the learning rate schedule directly to the optimizer, using the learning_rate argument in its __init__ function. This should be an instance of a subclass of tf.keras.optimizers.schedules.LearningRateSchedule - and not an instance of the callback (tf.keras.callbacks.LearningRateScheduler).

  • learning_rate (float) – Learning rate for the default optimizer. If you provide a custom optimizer, this learning rate will not apply.

  • scheduler_step (str) –

    The interval at which the learning rate scheduler's step function should be called. Valid options are "batch" and "epoch".

    For Keras, this option has no effect, as LearningRateScheduler instances are called by the optimizer on each step automatically.

  • callbacks (Optional[List[BaseCallback]]) – A list of callbacks. The progress bar callback will be prepended to this list.

  • device (str) – The device to which to move the model. Supported options are "cpu" and "cuda" (for GPU).

save(*args, **kwargs)[source]#

Save the embedding model.

You need to pass the path where the model should be saved, either positionally in args or in kwargs (under the filepath key).

  • args – Arguments to pass to save method of the embedding model.

  • kwargs – Keyword arguments to pass to save method of the embedding model.

state: finetuner.tuner.state.TunerState#